Posts Tagged ‘Risk’
Over at Krebs on Security, a rare and fascinating look into the monetary and brand-reputation effects a real-world breach can have on a corporation was outlined last week in the post “FDIC: 2011 FIS Breach Worse Than Reported“. The post provides an in-depth review of the impact of the 2011 breach at FIS, in which FIS originally stated in its filing with the SEC that “7,170 prepaid accounts may have been at risk and that three individual cardholders’ non-public information may have been disclosed as a result of the unauthorized activities”. The article provided two very interesting insights. First, there are truly real-world financial and brand consequences to failing to effectively implement network security controls. Krebs’ article provides an in-depth look at the results of the FDIC audits performed at FIS in 2011 and 2012 as a result of the original breach incident. What was interesting to learn is that because FIS is a service provider to banks and not actually a bank itself, the FDIC is unable to levy fines against it or shut it down directly. However, in May of this year, the FDIC sent the results of its audits to all of FIS’s customers, attaching a letter that began “We are sending you this report for your evaluation and consideration in managing your vendor relationship with FIS.” The FDIC made this decision despite the fact that FIS has spent over $100 million trying to shore up its network security controls. The FDIC’s actions will obviously have some negative brand and revenue impact for FIS.
The second interesting point within the post was the detail around the environment FIS was attempting to secure, and the number of vulnerabilities it was dealing with. Portions of the FDIC report noted in the post showed that FIS was dealing with “approximately 30,000 servers and operating systems, another 30,000 network devices, over 40,000 workstations, 50,000 network circuits, and 28 mainframes running 80 LPARs”. The post also highlights that “The Executive Summary Scan reports from November 2012 show 18,747 network vulnerabilities and over 291 application vulnerabilities as past due”. While 18,747 vulnerabilities identified in a scan might seem like a lot, it is not uncommon in a network of this size and scope. Many FireMon customers have seen scan results with an even greater number of identified vulnerabilities. The challenge when faced with this many vulnerabilities is knowing which ones truly matter. Out of 18,000+ vulnerabilities, how would you know which ones to remediate first? Attempting to manually sort through the vulnerabilities, or simply patching the highest-value assets, doesn’t actually solve the problem. An automated, intelligent and continuous real-time assessment that shows which assets are truly reachable over the network by an attacker, and which remediation efforts will reduce the greatest amount of risk (and access), is the only way to proactively solve this problem.
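As a rough illustration of this kind of prioritization (a sketch over made-up data, not any vendor's actual algorithm), each finding can be weighted by severity and asset value, with findings on assets that are unreachable by an attacker dropped to the bottom:

```python
# Illustrative sketch only: rank scan findings so that vulnerabilities on
# network-reachable, high-value assets float to the top. The field names
# (cvss, asset_value, reachable) are hypothetical.

def risk_score(finding):
    """Severity times asset value, zeroed out if no attack path exists."""
    if not finding["reachable"]:
        return 0.0
    return finding["cvss"] * finding["asset_value"]

def prioritize(findings):
    """Return findings sorted from highest to lowest risk."""
    return sorted(findings, key=risk_score, reverse=True)

findings = [
    {"id": "CVE-A", "cvss": 9.8, "asset_value": 2,  "reachable": False},
    {"id": "CVE-B", "cvss": 5.0, "asset_value": 10, "reachable": True},
    {"id": "CVE-C", "cvss": 7.5, "asset_value": 3,  "reachable": True},
]

for f in prioritize(findings):
    print(f["id"], risk_score(f))
```

Note how the highest-CVSS finding ranks last here: severity alone says little until you know whether an attacker can actually reach the asset.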
A Federal Times article recently noted that three former federal IT executives, including two high-ranking IT security officials from the Office of Management and Budget (OMB), felt that government IT security was too focused on compliance, with requirements that “oftentimes do not reflect their agencies’ most critical security needs”. In a new report entitled “Measuring What Matters: Reducing Risk by Rethinking How We Evaluate Cybersecurity”, the authors note that government agencies “continue to spend scarce resources on measures that do little to address the most significant cyber threats.”
The report outlines the authors’ proposal for a new approach to security: the Organization Cyber Risk Management Framework. This is a risk-centric security management posture that focuses on establishing a security baseline for agencies, allowing them to correctly assess their risk posture based on empirical data. The authors note that in order to move to this framework, agencies must first implement automated continuous monitoring programs, which they identify as “continuous diagnostics and mitigation, configuration management, threat assessment, and remediation practices.” We at FireMon could not be more excited to see the report identify the importance of configuration management, and we have highlighted its importance as it relates to risk on this blog previously. When discussing a risk-based approach, security practitioners tend to gravitate to threat management. Threat management is sexy; it involves attacks and attackers, and makes security practitioners feel more like MacGyver than Dilbert. Configuration management, on the surface, seems less sexy. Getting notification that someone added a new ACL to a router doesn’t evoke images of thwarting a hacker’s attack. Consider the all-too-common scenario, though, where the router admin fat-fingered said ACL and accidentally enabled access to an internal network that should not be reachable from the outside world. Without real-time configuration change alerting that can identify a violation of agency or corporate security policy, an attacker might end up being the one that ultimately alerts the organization to the misconfiguration.
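A check like this can be mechanical. As a minimal sketch (assuming a simplified rule model with hypothetical `action`, `src`, and `dst` fields, and `10.0.0.0/8` standing in for the internal range), a policy-violation test that could run on every ACL change might look like:

```python
# Hypothetical sketch of a policy check run on each ACL change: flag any
# rule that permits traffic from "any" source into an internal zone.
import ipaddress

INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def violates_policy(rule):
    """A permit rule from anywhere into the internal range violates policy."""
    src = ipaddress.ip_network(rule["src"])
    dst = ipaddress.ip_network(rule["dst"])
    return (rule["action"] == "permit"
            and src.prefixlen == 0          # source is effectively "any"
            and dst.subnet_of(INTERNAL))

# The fat-fingered ACL from the scenario above: outside world into internal.
new_rule = {"action": "permit", "src": "0.0.0.0/0", "dst": "10.20.0.0/16"}
if violates_policy(new_rule):
    print("ALERT: rule exposes an internal network to the outside world")
```

The point of the sketch is only that the policy is expressed once and evaluated automatically on every change, rather than depending on a human noticing the typo.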
The report is very comprehensive, and provides a very thorough framework for how to implement a risk-based security practice. While it is clearly focused on Federal Government agency environments, it provides some good insights for corporate security practitioners as well. The report concludes that “To fix the problems of today and those of the years ahead, government should implement a more consistent method of evaluating cybersecurity threats — one which is measurable, transparent, and outcome-oriented.” It is refreshing to see a recommendation on moving to a risk-based security posture, and one that includes device configuration management and its importance in truly knowing your risk posture.
For those of you following the now almost daily headlines on cyber-security breaches occurring around the world, you probably saw the recent Department of Energy and Federal Reserve breaches. As Reuters noted in their article on the Federal Reserve breach, “The Federal Reserve system is aware that information was obtained by exploiting a temporary vulnerability in a website vendor product,” a Fed spokeswoman said. The Dark Reading article on the Department of Energy breach noted that the DOE planned to “implement a full remediation plan” once the full extent of the attack was known. The DOE continued by stating “The Department is also leading an aggressive effort to reduce the likelihood of these events occurring again. These efforts include leveraging the combined expertise and capabilities of the Department’s Joint Cybersecurity Coordination Center to address this incident, increasing monitoring across all of the Department’s networks and deploying specialized defense tools to protect sensitive assets.”
Both incidents reinforce the need to proactively identify what assets are at risk on your network, versus reacting and patching after a breach has occurred. While the Department of Homeland Security announced the continuous monitoring initiative last year, the time frame for implementation clearly needs to be moved up. According to GovInfo Security, the Federal Government responded to 106,000 attacks in 2011. Clearly, the traditional approach of reacting to an attack and patching the vulnerability is not preventing future attacks.
All organizations, not just the Federal Government, need to become more proactive and find the potential exploits before the attackers do. We have discussed the need for this many times previously on this blog. Securosis continues to lead the call for more proactive solutions as well, advocating just a couple of weeks ago for the merits of an Early Warning System. The technology is available now to address this need. It is imperative for your network’s security to know what assets are truly at risk right now. If you don’t know the answer to that question, chances are an attacker’s exploit might just answer it for you.
Richard Stiennon recently posted an article on Network World discussing why risk management fails in IT. Mr. Stiennon posits that risk management is a carry-over from the bigger world of business, and does not work in the infosecurity world. Stiennon identifies four key points to try to defend his position:
1. It is expensive and almost impossible to identify all IT assets.
2. It is impossible to assign value to IT assets.
3. Risk management methods invariably fail to predict the actual disasters.
4. Risk management devolves to “protect everything.”
He finishes his article by stating that we need to move to “threat management” as opposed to risk management.
Let’s address each of Stiennon’s points. Stiennon’s argument that it is impossible to identify all IT assets is in fact wrong. There are tools in existence today that can automate the identification of all assets within an organization, such as Insightix from our partner McAfee. It is also not impossible to assign value to IT assets. The FAIR framework has provided a comprehensive guide to assigning value to IT assets within the framework of risk management for years. At just a basic level, most organizations can at least identify what their most valuable assets are (where the finance information is, where the intellectual property resides, etc.) and devise a ranking or value system around that. It is also not true that risk management fails to predict the actual disasters. Many companies provide software solutions that automate the analysis of your network and identify exactly what assets are truly at risk, including our own Risk Analyzer. Finally, most security practitioners would say that their job is in fact to protect everything within their network environment. I have yet to meet a security professional who talks about the assets they are simply writing off and not worrying about protecting.
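To make the asset-valuation point concrete, here is a deliberately simplified sketch in the spirit of FAIR (not the full framework): rank assets by annualized loss exposure, the product of estimated loss event frequency and loss magnitude. Every asset name and dollar figure below is a made-up illustration.

```python
# Deliberately simplified, in the spirit of the FAIR framework (not the full
# model): annualized loss exposure = loss event frequency x loss magnitude.
# All figures are hypothetical.

def annualized_loss(frequency_per_year, loss_per_event):
    """Expected yearly loss for one asset."""
    return frequency_per_year * loss_per_event

assets = {
    "finance-db":    annualized_loss(0.5, 2_000_000),  # breach every ~2 years
    "ip-fileshare":  annualized_loss(0.1, 5_000_000),  # rare, but costly
    "intranet-wiki": annualized_loss(1.0, 50_000),     # frequent, low impact
}

# Rank assets by expected yearly loss to decide where defenses go first.
for name, ale in sorted(assets.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ${ale:,.0f}/year")
```

Even estimates this crude are enough to produce the basic ranking Stiennon claims is impossible; the FAIR framework refines each input with ranges and calibrated estimation rather than single guesses.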
Furthermore, Stiennon’s position assumes that there is some fundamental or significant difference between “threat management” and “risk management”. Webster’s defines threat as “an indication of something impending, an expression of intention to inflict evil, injury or damage”, and defines risk as “the possibility of loss or injury; someone or something that creates or suggests a hazard.” I would argue that these terms are more similar than opposite in nature. Unfortunately, Stiennon doesn’t elaborate on what “threat management” is beyond a link to an article on UTM appliances.
Risk management is indeed a challenging practice to implement within an IT organization. In large enterprise and service provider environments, it is truly a huge undertaking. However, it is not so difficult that it can’t be done, or be effective, and therefore I have to respectfully disagree with Stiennon’s position. Here at FireMon, we have had a series of posts on how to effectively operationalize and automate risk management within your everyday IT security operations by leveraging the real-time Security Manager and Risk Analyzer solution. Securosis has an excellent whitepaper discussing vulnerability management platforms aimed at effective risk management within IT, and SIRA offers insights and guidance on how to achieve this daily. Risk management is an effective, necessary and crucial part of any organization’s IT security operation, and the reports of its untimely death are greatly exaggerated.
Every organization, regardless of size, has limited resources when trying to address the security of its network. Whether you work in a large Fortune 500 environment or a small business, the limitations of the resources allocated to security require you to make some tough decisions about what you will or won’t do when it comes to securing the organization. Here at FireMon, we believe there is really only one question that matters when prioritizing what to do to secure your network: What assets are truly at risk?
As Securosis pointed out in their excellent Vulnerability Management Evolution white paper earlier this year, organizations “need the ability to analyze threat-related data, combine it with an understanding of what is vulnerable, and provide visibility to what is meaningfully at risk.” When trying to address the risk to their environment, most organizations have relied on the vulnerability scanner. Vulnerability scanners are extremely effective at their job, and are the core component of identifying vulnerabilities within your network. Simply running a vulnerability scanner by itself, though, and then deciding which of the hundreds, thousands or tens of thousands of vulnerabilities should be patched is not enough. Without knowledge of the network topology and the mitigating security controls that are in place, vulnerability scan results are just another list of things to get to at some point when trying to prioritize your network security activities.
Fortunately, we have done a lot of work in developing a tool that understands what assets are truly vulnerable on your network. FireMon Security Manager with the patented Risk Analyzer add-on enables you to see visually exactly what assets are meaningfully at risk. Our partnership with Rapid7 and the integration of Metasploit with Risk Analyzer takes this understanding to an even deeper level, allowing you to prioritize not only which assets are vulnerable, but which assets can have exploit code executed on them by an attacker. You can learn more about this enhanced integration in a joint on-demand webinar we did recently with Rapid7 here. FireMon will also be highlighting the importance of operationalizing risk on day 2 of the 2012 United Security Summit. We hope to see you there.
Yet another systems breach was reported last week, this time at the University of North Florida, affecting 23,000+ students. This in and of itself is unfortunately nothing new, as we have been inundated weekly with reports of breaches occurring at organizations over the last 18 months. What struck a chord with this incident at UNF, however, is that it is not the first time the college has experienced data loss from an external attacker. In October of 2010, the school was also attacked by an external hacker, and 107,000 students were affected in that incident. UNF has posted an FAQ on the latest attack here. One of the more interesting questions is what the university is doing to make sure this doesn’t happen again, with the school providing the following answer: “The method used by the intruder to gain access has been identified and steps have already been taken to prevent a reoccurrence. The University Police Department, in conjunction with Housing and ITS, is investigating this incident.”
Considering this is the second time the school has been attacked, one can imagine this response wasn’t too reassuring to the students. The incident also shows that the traditional reactive approach to security needs to be replaced by a proactive, risk-based approach. After the first incident in 2010, the school stated that “The university shut down the compromised server and has taken other precautions to prevent future incidents.” One can only assume that the specific exploit on the compromised server was patched, or perhaps a specific service was blocked on the firewall. Reacting to that specific threat, and assuming the remediation actions taken would protect the school moving forward, was clearly not the most comprehensive approach to protecting against future threats.
The most successful organizations that combat risk today “have a much better handle controlling what is deployed on their networks and whether these assets are vulnerable to imminent threats,” as Jon Oltsik noted earlier this month on his blog. He also pointed out, though, that only 20% of organizations today have a risk management plan in place that includes some form of threat intelligence. FireMon has always believed it is important to proactively identify areas of risk, whether they come from adding a rule to your firewall that inadvertently introduces risk by being overly permissive, or from identifying in real time which assets on your network are most vulnerable to exploitation. With the release of Security Manager 6.0 with the Risk Analyzer add-on, organizations now have a complete Security Posture Management tool that provides unparalleled visibility to understand the scope of business vulnerability and prioritize the proactive defense of critical assets, while maintaining high confidence that their security infrastructure is free of human error or incompatibilities between policies and protection. Avoid having to post a breach FAQ; adopt a proactive, risk-based approach to security management today.
SANS recently published their Analyst Program survey on log and event management. Report author Jerry Shenk noted many interesting facts within the paper. Specifically, he highlighted that “The data suggests that respondents are having difficulty separating normal traffic from suspicious traffic,” and that security practitioners “need advanced correlation and analysis capabilities to shut out the noise and get the actionable information they need.” Despite the ever-evolving threat landscape, as noted in the latest Symantec threat report, there was another telling statement within the SANS report: “A large percentage of organizations—22 percent of the respondents—say they have little or no automation and no plans to change. The most common reasons given for not automating include lack of time and money… resources that are closely intertwined.”
Log analysis is certainly a key component of any organization’s security practice. Maintaining logs can help with forensic analysis when reviewing a breach, and can help establish baselines so that anomalies are noticed when they occur. The statistics around actual time spent analyzing logs for attacks were extremely telling, though. When the IT professionals were asked how much time they normally spend on log-data analysis, the largest group (35%) replied, “none to a few hours per week.” As for the rest, 18% didn’t know, 11% said one day per week, 2% outsourced this task to a managed security service provider, and 24% defined it as “integrated into normal workflow.” The SANS survey report, which notes that analysis time overall actually seems down from last year, also noted that about 50% of the smaller organizations spent zero to just a few hours analyzing logs.
These statistics show that log analysis is a difficult and time-consuming process that even the largest organizations are struggling to integrate into the everyday operations of security, much less smaller organizations with limited security staffs. That is why we at FireMon believe it is vital to augment SIEM products with a tool that can operationalize the identification of risk to the network in real time. The tool should automate the identification of assets that can be compromised, and be simple and easy to deploy for any organization regardless of size. Risk Analyzer is just such a product. It automates the identification of assets at risk in your network, and provides a prioritized list of actions that will reduce the greatest amount of risk with the least amount of effort. As the SANS report notes in its conclusion, “the issue has been getting usable and actionable information out of the data when they need it for detection and response.” Risk Analyzer does exactly that; it provides actionable information that will reduce the risk to your network.
Many in the security field have been following the story of the Global Payments breach this week. Brian Krebs first reported the story on his award-winning security blog, and he has continued to follow it as more details have been uncovered day by day. As many outlets reported, due to the breach, Visa removed Global Payments from its list of preferred vendors. Global Payments can still process transactions, but at a significantly higher fee. The company’s stock dropped 9% the day of the breach before trading was halted, and has continued to drop since trading resumed on Monday. It is also expected that Global Payments will have to dip into its cash reserves of $300-400 million to cover the losses associated with the breach.
The negative financial blows to Global Payments noted above highlight the significant impact a security breach can have on a company today. Gone are the days when security vendors warned of the potential impact a nefarious hacker might have on your network, hoping to play the fear card in order to gain a sale. The threats from multinational criminal and state-sponsored hacker groups are now very real, and these threats can inflict significant financial and public relations damage on your organization. With the spate of attacks and breaches that have been covered in the last year, security is finally becoming a topic of focus in the executive suite, with many leaders struggling to determine how to communicate the state of security effectively.
Global Payments issued a statement on the breach, which included the following from their CEO: “It is reassuring that our security processes detected an intrusion.” However, in Krebs’ latest update to the story, he notes that the New York Times reported that Global Payments was breached in early 2011. One of Krebs’ hacker sources also shared similar information, saying “the company’s <Global Payments> network was under full criminal control from that time until March 26, 2012.” Global Payments’ stock has been negatively affected, their fees to do business with Visa have significantly increased, and they have a large payout from their cash reserves looming to both Visa and MasterCard to cover the cardholder losses resulting from this breach. In light of those facts, it is surprising to hear their CEO is reassured that they discovered the intrusion after the fact.
Breaches like Global Payments’, as well as the numerous events highlighted in 2011, show that the reactive approach that has been taken within the security world is not adequate to protect companies from the negative financial impacts a breach can inflict. Companies need to operationalize risk within their day-to-day security activities, and reduce the danger to their networks by making threats and vulnerabilities visible and actionable. This enables organizations to prioritize and address high-risk security vulnerabilities before breaches occur. FireMon’s Risk Analyzer, now integrated into Security Manager with the 6.0 release, automates the identification of which assets are vulnerable within a network, and prioritizes the actions that will reduce the greatest amount of risk with the least amount of effort. Risk Analyzer moves security from a reactive exercise to a proactive approach that allows you to fix your vulnerable assets before they can be exploited. As this latest breach exposes, not operationalizing risk within your security organization can be a costly decision.
At the RSA Conference yesterday, our President Jody Brazil moderated a fascinating panel discussion on the state of the firewall and whether it will remain a relevant tool with the increase in virtualization and cloud adoption. The panel featured Chris Hoff, Chief Security Architect at Juniper Networks; Manny Rivelo, EVP of Security at F5; and Vik Phatak, CTO of NSS Labs. Over 600 people attended this lively session. All of the panelists agreed that firewalls will remain a relevant security tool, but that cloud and virtualization will provide an opportunity to develop new ways to deploy the firewall. The panelists also agreed that the firewall will evolve into a service delivered within the cloud or virtualized environment, and will ultimately move from CLIs and GUIs to APIs.
The discussion also touched on security in general within these new network paradigms, and the panelists were asked to identify one or two key points the attendees should consider when they returned to their own networks after RSA. Chris Hoff stated that as security practitioners, we need to move from managing vulnerabilities and reacting to incidents to managing risk. Operationalizing risk is the key to effectively reducing and remediating risk within your environment. At FireMon, we couldn’t agree more. Our Risk Analyzer product enables you to manage risk in real time on your network, and to proactively eliminate potential vulnerabilities before an attacker can exploit them. This tool allows you to improve your risk posture over time, and to demonstrate the effectiveness of the security controls you have deployed within your environment. As we noted previously, Risk’s time is now.
In our first post on accurately measuring and scoring risk, we examined the holistic network approach many enterprises take to managing risk: running vulnerability scanners against parts of the network, or the network in its entirety, at some predetermined interval. In both cases, scans are run, vulnerabilities are identified and possibly prioritized based on asset value, patching activities are scheduled over the next month or quarter, and the cycle repeats itself. As we noted, this approach over-simplifies the complex task of managing risk, as different threats and different assets define different risks.
The answer to this dynamic risk challenge is clear. Organizations need to operationalize risk into their daily security activities, rather than make risk management simply a set event that occurs at predetermined intervals. As changes occur to the organization’s risk posture based on the business activities noted in our last post, or larger corporate events such as M&A or moving to the cloud, security organizations need to be able to dynamically and easily analyze the change to their risk posture in real time. To do so effectively, a tool that provides the ability to create different risk scenarios is required. Scenarios enable an organization to address each different threat to its assets as changes occur.
In the previous post, we provided the example of a business unit requesting VPN access to a new business partner after the predetermined scan had already been run. Leveraging a tool that provides the ability to create different risk scenarios, the security team would be able to create a new scenario to identify the new connectivity from the business partner into their network. To truly be effective, the tool would not only need to be able to identify this new connection, but have the contextual awareness of the firewall policy, network topology and any other network security devices that might be traversed between the front and back end systems involved in this new connectivity to accurately identify any potential vulnerabilities that are introduced from this new partnership.
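As a toy illustration of the scenario idea (an assumed model of zones and permitted hops, not any product's actual analysis engine), the business-partner VPN can be modeled as one new edge in the topology; recomputing reachability then shows which vulnerable assets the new connection exposes:

```python
# Toy model of scenario analysis: add the partner VPN as a new permitted
# connection, recompute what it can reach, and intersect that with the set
# of known-vulnerable assets. All zone names are hypothetical.
from collections import deque

def reachable(edges, start):
    """Breadth-first search over permitted connections from a starting zone."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Baseline topology, then the same topology plus the new partner VPN link.
baseline = {"internet": ["dmz"], "dmz": ["app"], "app": ["db"]}
scenario = dict(baseline, **{"partner-vpn": ["app"]})

vulnerable = {"db", "legacy-server"}
exposed = reachable(scenario, "partner-vpn") & vulnerable
print(sorted(exposed))  # vulnerable assets the new partner link can reach
```

A real analysis would of course traverse actual firewall policies, NAT and routing rather than a hand-built graph, but the shape is the same: the scenario is just a delta to the model, and the risk question becomes a reachability query.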
FireMon Risk Analyzer is just that tool. Risk Analyzer enables administrators to create different scenarios: VPN connectivity to new business partners, connectivity to a cloud provider, a new data center coming online. Combined with Risk Analyzer’s full network topology and security policy awareness (which can be continually updated in real time via FireMon Security Manager), end users are able to define new risk scenarios, proactively identify the new risk each scenario introduces, and virtually apply remediation to ensure that the most effective remediation is completed with the least amount of effort. Multiple scenarios can be created as different threats or business events are identified, and as changes occur to the configuration or connectivity within the scenarios, end users can easily and immediately re-run the scenario within Risk Analyzer to assess how these changes affect the true risk posture of the organization. Risk scenarios enable organizations to achieve the goal of operationalizing risk into their everyday activity.