Over at Krebs on Security, a rare and fascinating look into the monetary and brand-reputation effects a real-world breach can have on a corporation was outlined last week in the post “FDIC: 2011 FIS Breach Worse Than Reported“. The post provides an in-depth review of the impact of the 2011 breach at FIS, in which FIS originally stated in its filing with the SEC that “7,170 prepaid accounts may have been at risk and that three individual cardholders’ non-public information may have been disclosed as a result of the unauthorized activities”. The article provided two very interesting insights. First, there are very real financial and brand consequences to failing to effectively implement network security controls. Krebs’s article provides an in-depth look at the results of the FDIC audits performed at FIS in 2011 and 2012 as a result of the original breach incident. What was interesting to learn is that because FIS is a service provider to banks and not actually a bank, the FDIC is unable to levy fines against it or shut it down directly. However, in May of this year, the FDIC sent the results of its audits to all of FIS’s customers, with a cover letter, as the post highlights, that began “We are sending you this report for your evaluation and consideration in managing your vendor relationship with FIS.” The FDIC took this action despite the fact that FIS has spent over $100 million trying to shore up its network security controls. The FDIC’s decision will obviously have some negative brand and revenue impact for FIS.
The second interesting point within the post was the detail around the environment FIS was attempting to secure, and the number of vulnerabilities it was dealing with. Portions of the FDIC report noted in the post showed that FIS was managing “approximately 30,000 servers and operating systems, another 30,000 network devices, over 40,000 workstations, 50,000 network circuits, and 28 mainframes running 80 LPARs”. The post also highlights that “The Executive Summary Scan reports from November 2012 show 18,747 network vulnerabilities and over 291 application vulnerabilities as past due”. While 18,747 vulnerabilities identified in a scan might seem like a lot, it is not uncommon in a network of this size and scope. Many FireMon customers have seen scan results with an even greater number of identified vulnerabilities. The challenge when faced with this many vulnerabilities is knowing which ones truly matter. Out of 18,000+ vulnerabilities, how would you know which ones to remediate first? Attempting to manually sort through the vulnerabilities, or simply patching the highest-value assets, doesn’t actually solve the problem. An automated, intelligent, continuous, real-time assessment of the vulnerabilities that shows which assets are truly reachable over the network by an attacker, and which remediation efforts will reduce the greatest amount of risk (and access), is the only way to proactively solve this problem.
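To make that idea concrete, here is a minimal sketch (in Python, with invented field names and sample data) of what reachability-aware prioritization might look like: findings an attacker cannot actually reach through the current network controls drop in priority, no matter how high their raw severity score. It illustrates the concept only; it is not FireMon’s actual analysis.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str          # asset the vulnerability was found on
    cvss: float        # severity reported by the scanner
    asset_value: int   # business value assigned to the asset (1-10)
    reachable: bool    # can an attacker reach this host through the current
                       # firewall/router configuration?

def prioritize(findings):
    """Rank findings so reachable, high-value, high-severity issues float to
    the top; unreachable findings sink regardless of raw CVSS score."""
    def score(f):
        exposure = 1.0 if f.reachable else 0.1  # mitigated by network controls
        return f.cvss * f.asset_value * exposure
    return sorted(findings, key=score, reverse=True)

findings = [
    Finding("db-prod-01", cvss=9.8, asset_value=10, reachable=False),
    Finding("web-dmz-03", cvss=7.5, asset_value=8,  reachable=True),
    Finding("dev-box-17", cvss=9.8, asset_value=2,  reachable=False),
]

for f in prioritize(findings):
    print(f.host)
```

In this toy data set the lower-severity but reachable DMZ host jumps ahead of the critical-but-unreachable database finding, which is exactly the kind of reordering a reachability-aware assessment produces.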
Citing a report prepared for the Defense Department by the Defense Science Board, the Washington Post published an article today highlighting attacks from Chinese cyber-spies that compromised U.S. weapons systems designs. The Post noted that the attacks exposed “programs critical to U.S. missile defenses and combat aircraft and ships.” The article specifically noted that “the advanced Patriot missile system, known as PAC-3; an Army system for shooting down ballistic missiles, known as the Terminal High Altitude Area Defense, or THAAD; and the Navy’s Aegis ballistic-missile defense system” were compromised, as well as “vital combat aircraft and ships, including the F/A-18 fighter jet, the V-22 Osprey, the Black Hawk helicopter and the Navy’s new Littoral Combat Ship”.
The Post’s article does not specifically cover how the designs were stolen, what methods were used to attack the networks, or whether these were attacks aimed at U.S. Government networks or defense contractors, although anonymous U.S. officials cited in the article “said senior U.S. defense and diplomatic officials presented the Chinese with case studies detailing the evidence of major intrusions into U.S. companies, including defense contractors.” The article also noted that a recent National Intelligence Estimate found “that China was by far the most active country in stealing intellectual property from U.S. companies”. This comes on top of Mandiant’s Intelligence Center report earlier this year detailing the activities of APT1, a China-based cyber-espionage group believed to be a unit of the People’s Liberation Army (PLA).
While the term cyber-warfare has been hyped quite extensively, and sometimes disingenuously, within the information security community, these reports highlight that certain cyber threat actors today are actively engaged in targeted attacks to gain information from networks. Without full details of how the attacks were executed, one can only speculate that the attackers discovered exploitable vulnerabilities within the network to gain access to, and ultimately extract, this data. It is yet further evidence that a reactive information security stance ultimately will not protect an organization from a dedicated attacker. To truly secure our networks, we as security practitioners must proactively identify the vulnerable systems on our networks that could lead to a breach before the attackers do, and prioritize our remediation efforts around the systems that pose the greatest risk of attack. Furthermore, to ensure ongoing security, security practitioners must be able to know in advance whether proposed network or security changes will introduce or expose systems to further risk of breach, and remediate those exposures before the change is committed. We have discussed this topic many times here on the FireMon blog, and pointed out that the technology to enable a risk-based security posture is already available. While many Federal officials have called for expedited adoption of a proactive risk policy, articles like today’s in the Washington Post show that those calls are not being heeded fast enough.
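As a rough illustration of checking a proposed change before it is committed, the sketch below (the rule fields, networks, and the policy itself are all invented for this example) rejects a proposed firewall rule that would open a protected internal network to untrusted sources:

```python
import ipaddress

# Networks we never want reachable directly from untrusted sources
# (hypothetical example values, not a recommended policy).
PROTECTED_NETS = [ipaddress.ip_network("10.20.0.0/16")]
UNTRUSTED_NETS = [ipaddress.ip_network("0.0.0.0/0")]

def violates_policy(rule):
    """Return True if a proposed 'permit' rule would expose a protected
    network to an untrusted source, so the change can be rejected before
    it is committed."""
    if rule["action"] != "permit":
        return False
    src = ipaddress.ip_network(rule["src"])
    dst = ipaddress.ip_network(rule["dst"])
    exposes = any(dst.subnet_of(p) for p in PROTECTED_NETS)
    from_untrusted = any(src.subnet_of(u) for u in UNTRUSTED_NETS)
    return exposes and from_untrusted

proposed = {"action": "permit", "src": "0.0.0.0/0", "dst": "10.20.5.0/24", "port": 3389}
if violates_policy(proposed):
    print("Change rejected: it would expose a protected network to untrusted sources.")
```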
A Federal Times article recently noted that three former Federal IT executives, including two high-ranking IT security officials from the Office of Management and Budget (OMB), felt that government IT security is too focused on compliance measures that “oftentimes do not reflect their agencies’ most critical security needs”. In a new report entitled “Measuring What Matters: Reducing Risk by Rethinking How We Evaluate Cybersecurity”, the authors note that government agencies “continue to spend scarce resources on measures that do little to address the most significant cyber threats.”
The report outlines the authors’ proposal for a new approach to security, the Organization Cyber Risk Management Framework. This is a risk-centric security management posture focused on establishing a security baseline for agencies that allows them to correctly assess their risk posture based on empirical data. The authors note that in order to move to this framework, agencies must first implement automated continuous monitoring programs, which they identify as “continuous diagnostics and mitigation, configuration management, threat assessment, and remediation practices.” We at FireMon could not be more excited to see the report identify the importance of configuration management; we have highlighted its importance as it relates to risk on this blog previously. When discussing a risk-based approach, security practitioners tend to gravitate to threat management. Threat management is sexy; it involves attacks and attackers, and makes security practitioners feel more like MacGyver than Dilbert. Configuration management, on the surface, seems less sexy. Getting a notification that someone added a new ACL to a router doesn’t evoke images of thwarting a hacker’s attack. Consider, though, the all-too-common scenario where the router admin fat-fingered said ACL and accidentally enabled access to an internal network that should not be reachable from the outside world. Without real-time configuration change alerting that can identify a violation of agency or corporate security policy, an attacker might end up being the one who ultimately alerts the organization to the misconfiguration.
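To make the fat-fingered-ACL scenario concrete, here is a deliberately simple sketch of real-time change alerting: diff the old and new router configurations and flag any newly added entry that permits traffic from “any” into internal (RFC 1918) address space. A real policy engine is far richer than this single regular expression; the sample ACL lines and the rule are invented for illustration.

```python
import difflib
import re

# Flag newly added ACL entries that permit traffic from "any" into RFC 1918 space.
RISKY = re.compile(r"permit\s+ip\s+any\s+(10\.|172\.(1[6-9]|2\d|3[01])\.|192\.168\.)")

def alert_on_change(old_config: str, new_config: str):
    """Return the newly added config lines that violate the policy above."""
    added = [
        line[1:].strip()
        for line in difflib.unified_diff(
            old_config.splitlines(), new_config.splitlines(), lineterm=""
        )
        if line.startswith("+") and not line.startswith("+++")
    ]
    return [line for line in added if RISKY.search(line)]

old = "access-list 110 permit tcp any host 203.0.113.10 eq 443\n"
new = old + "access-list 110 permit ip any 10.20.5.0 0.0.0.255\n"

for violation in alert_on_change(old, new):
    print("ALERT: new ACL entry violates policy ->", violation)
```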
The report is very comprehensive, and provides a very thorough framework for how to implement a risk-based security practice. While it is clearly focused on Federal Government agency environments, it provides some good insights for corporate security practitioners as well. The report concludes that “To fix the problems of today and those of the years ahead, government should implement a more consistent method of evaluating cybersecurity threats — one which is measurable, transparent, and outcome-oriented.” It is refreshing to see not only a recommendation to move to a risk-based security posture, but one that recognizes the role of device configuration management in truly knowing your risk posture.
For those of you following the now almost daily headlines on cyber-security breaches occurring around the world, you probably saw the recent Department of Energy and Federal Reserve breaches. As Reuters noted in their article on the Federal Reserve breach, “The Federal Reserve system is aware that information was obtained by exploiting a temporary vulnerability in a website vendor product,” a Fed spokeswoman said. The Dark Reading article on the Department of Energy breach noted that the DOE planned to “implement a full remediation plan” once the full extent of the attack was known. The DOE continued by stating “The Department is also leading an aggressive effort to reduce the likelihood of these events occurring again. These efforts include leveraging the combined expertise and capabilities of the Department’s Joint Cybersecurity Coordination Center to address this incident, increasing monitoring across all of the Department’s networks and deploying specialized defense tools to protect sensitive assets.”
Both incidents reinforce the need to proactively identify which assets are at risk on your network, rather than reacting and patching after a breach has occurred. While the Department of Homeland Security announced the continuous monitoring initiative last year, the timeframe for implementation clearly needs to be moved up. According to Govinfo Security, the Federal Government responded to 106,000 attacks in 2011. Clearly, the traditional approach of reacting to an attack and patching the vulnerability is not preventing future attacks.
All organizations, not just the Federal Government, need to become more proactive and find the potential exploits before the attackers do. We have discussed the need for this many times previously on this blog. Securosis continues to lead the call for more proactive solutions as well, advocating just a couple of weeks ago for the merits of an Early Warning System. The technology is available now to address this need. It is imperative for your network’s security to know which assets are truly at risk right now. If you don’t know the answer to that question, chances are an attacker’s exploit will answer it for you.
FireMon announced the release of Security Manager version 6.1 yesterday. We are extremely excited about the new features and functionality that are a part of this release, which further extend FireMon’s unparalleled ability to strengthen both operational effectiveness and security posture. One feature that we are particularly keen on is the new Access Path Analysis (APA). Leveraging the patent-pending FireMon behavior analysis framework, IT personnel can both proactively predict and forensically record the flow of packets through network configurations and obtain detailed path analysis – including routes, interfaces, firewall and NAT rules that a packet encounters while traversing the network. Access Path Analysis uses the behavior of normal traffic as it traverses the network to understand what vectors and/or behaviors could allow malicious traffic to find critical assets. This allows more effective risk analysis and better informed remediation activities.
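As a toy illustration of the path-analysis concept (and only that; this is not FireMon’s implementation), the sketch below walks a packet’s destination address through an ordered set of hypothetical hops and reports which device permits or denies it along the way:

```python
import ipaddress

# A toy topology: each hop toward the destination has a name and an ordered
# list of (action, destination_network) filter rules.
TOPOLOGY = [
    {"device": "edge-router",   "rules": [("permit", "0.0.0.0/0")]},
    {"device": "dmz-firewall",  "rules": [("permit", "172.16.10.0/24"), ("deny", "0.0.0.0/0")]},
    {"device": "core-firewall", "rules": [("deny", "10.0.0.0/8"), ("permit", "0.0.0.0/0")]},
]

def trace(dst_ip: str):
    """Walk the ordered hops toward dst_ip and report, per device, whether the
    first matching rule permits or denies the packet."""
    dst = ipaddress.ip_address(dst_ip)
    path = []
    for hop in TOPOLOGY:
        verdict = "deny"  # implicit deny if nothing matches
        for action, net in hop["rules"]:
            if dst in ipaddress.ip_network(net):
                verdict = action
                break
        path.append((hop["device"], verdict))
        if verdict == "deny":
            break
    return path

for device, verdict in trace("172.16.10.25"):
    print(f"{device}: {verdict}")
```

A real analysis also has to account for routing tables, interfaces, NAT translation and source criteria, but even this stripped-down walk shows how a path verdict per device supports the “which assets can an attacker actually reach” question.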
The 6.1 release includes additional features, including FireMon Insight, Device Packs and a new FireMon Query Language (FMQL) API. FireMon Insight is a real-time dashboard of all your security configurations. Insight consumes the configurations of all major firewall vendors and presents data across all of them in a single, customizable dashboard. There is a critical need to transform configuration data into a usable form that can be quickly digested and acted upon. Insight enables security practitioners to quickly get the results of their queries, even across hundreds of thousands of rules and millions of objects in multi-vendor environments, and turn those queries into meaningful, automatically generated security metrics in a matter of seconds. Device Packs will enable FireMon to add support for new devices more quickly, without requiring an upgrade to Security Manager. The FMQL API will enable large organizations with a development staff, or managed service providers, to pull FireMon data and analysis into other systems. You can learn about all of these new features here, and read what Dark Reading wrote about the release as well.
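The release announcement doesn’t document the FMQL API itself, but as a rough sketch of the kind of integration an API like this enables, the example below pulls rule-query results from a hypothetical REST endpoint and turns them into a simple metric another system could consume. The URL, query syntax and response shape here are invented purely for illustration and are not FireMon’s actual interface.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint and query syntax -- illustrative only, not the real FMQL API.
BASE_URL = "https://securitymanager.example.com/api/query"
QUERY = "rules where action = 'ACCEPT' and source = 'ANY'"

def count_overly_permissive_rules():
    """Query the (hypothetical) API and return how many rules matched."""
    url = BASE_URL + "?q=" + urllib.parse.quote(QUERY)
    with urllib.request.urlopen(url) as resp:  # assumes authentication is handled elsewhere
        results = json.load(resp)
    return len(results.get("rules", []))

print("Overly permissive rules:", count_overly_permissive_rules())
```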
Gartner recently published a new research note, “One Brand of Firewall Is a Best Practice for Most Enterprises”, that advocates exactly what the title states: that consolidating to one brand of firewall is a best practice for most enterprises. This is a position that all of the major firewall vendors will certainly welcome, and one that will likely stir up a significant amount of debate. Deploying two different firewall vendors has long been an accepted best practice within network security, and the majority of customers we have interfaced with here at FireMon certainly follow that model. While I will leave the debate on this to other blogs, one statistic noted in the research note caught my eye.
Gartner noted in the research note that “Through 2018, more than 95% of firewall breaches will be caused by firewall misconfigurations, not firewall flaws.” We at FireMon could not agree more. Previously on this blog, we have highlighted that configuration mistakes not only risk impacting network performance, but more importantly represent a potential security risk to the network. Firewall admins can stop a revenue-generating service from working properly by incorrectly configuring a firewall policy (and have done so in many environments). The bigger threat, though, is the potential of allowing connectivity from outside the network to a portion of the internal network that should not be accessible. As Gartner noted in the research note, “Diligence in patching firewalls, monitoring configuration and assessing the rule base is required to maintain security.” Misconfiguration of your firewall policy is a serious security threat, and regardless of your opinion on the one-vendor versus two-vendor firewall debate, a tool that automates the monitoring, configuration and assessment of your firewall rulebase is required to maintain effective network security. FireMon Security Manager can provide that automation for you, and help to eliminate your firewall’s greatest threat.
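As a small illustration of what automated rulebase assessment means in practice, the sketch below uses an invented, vendor-neutral rule model and flags accept rules that are wide open on both source and service, the classic misconfiguration Gartner’s statistic points to:

```python
# A minimal rulebase audit over a simplified, illustrative rule model
# (these field names are not any particular vendor's syntax).
rulebase = [
    {"id": 101, "src": "any", "dst": "dmz-web",      "service": "https", "action": "accept"},
    {"id": 102, "src": "any", "dst": "internal-lan", "service": "any",   "action": "accept"},
    {"id": 103, "src": "any", "dst": "any",          "service": "any",   "action": "drop"},
]

def audit(rules):
    """Flag accept rules that are broad on both source and service."""
    return [
        r for r in rules
        if r["action"] == "accept" and r["src"] == "any" and r["service"] == "any"
    ]

for r in audit(rulebase):
    print(f"Rule {r['id']} permits any source and any service to {r['dst']} -- review required")
```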
Richard Stiennon recently posted an article on Network World discussing why risk management fails in IT. Mr. Stiennon posits that risk management is a carry-over from the bigger world of business, and does not work in the infosecurity world. Stiennon identifies four key points to defend his position:
1. It is expensive and almost impossible to identify all IT assets.
2. It is impossible to assign value to IT assets.
3. Risk management methods invariably fail to predict the actual disasters.
4. Risk management devolves to “protect everything.”
He finishes his article by stating that we need to move to “threat management” as opposed to risk management.
Let’s address each of Stiennon’s points. Stiennon’s argument that it is impossible to identify all IT assets is simply wrong. There are tools available today that can automate the identification of all assets within an organization, such as Insightix from our partner McAfee. It is also not impossible to assign value to IT assets. The FAIR framework has provided a comprehensive guide to assigning value to IT assets within a risk-management context for years. At a basic level, most organizations can at least identify their most valuable assets (where the finance information is, where the intellectual property resides, etc.) and devise a ranking or value system around that. It is also not true that risk management fails to predict the actual disasters. Many companies provide software solutions that automate the analysis of your network and identify exactly which assets are truly at risk, including our own Risk Analyzer. Finally, most security practitioners would say that their job is in fact to protect everything within their network environment. I have yet to meet a security professional who talks about the assets they are simply writing off and not worrying about protecting.
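As a simple illustration that assigning relative value and ranking exposure is tractable, consider the toy weighting below. This is not the FAIR methodology, just invented numbers showing that even a crude value-times-exposure score produces a defensible ordering:

```python
# A deliberately simplified asset-ranking sketch with made-up values.
assets = [
    {"name": "erp-db",       "value": 10, "exposed_vulns": 4},
    {"name": "public-web",   "value": 6,  "exposed_vulns": 12},
    {"name": "test-jenkins", "value": 1,  "exposed_vulns": 20},
]

def rank(assets):
    # Weight exposure by business value so a handful of flaws on the ERP
    # database outranks a pile of flaws on a throwaway test server.
    return sorted(assets, key=lambda a: a["value"] * a["exposed_vulns"], reverse=True)

for a in rank(assets):
    print(a["name"], a["value"] * a["exposed_vulns"])
```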
Furthermore, Stiennon’s position assumes that there is some fundamental or significant difference between “threat management” and “risk management”. Webster’s defines a threat as “an indication of something impending, an expression of intention to inflict evil, injury or damage”, and risk as “the possibility of loss or injury; someone or something that creates or suggests a hazard.” I would argue that these terms are more similar than different in nature. Unfortunately, Stiennon doesn’t elaborate on what “threat management” is beyond a link to an article on UTM appliances.
Risk management is indeed a challenging practice to implement within an IT organization. In large enterprise and service provider environments, it is truly a huge undertaking. However, it is not so difficult that it can’t be done, or be effective, and therefore I have to respectfully disagree with Stiennon’s position. Here at FireMon, we have had a series of posts about how to effectively operationalize and automate risk management within your everyday IT security operations, leveraging the real-time Security Manager and Risk Analyzer solution. Securosis has an amazing whitepaper discussing vulnerability management platforms aimed at effective risk management within IT, and SIRA offers insights and guidance on how to achieve this daily. Risk management is an effective, necessary and crucial part of any organization’s IT security operation, and the reports of its untimely death are greatly exaggerated.
Last week I spoke at the United Security Summit about operationalizing risk into everyday security operations (and had some fun with song parody titles along the way as evidenced by the photo attached to this post). The talk focused on the different elements required to answer the only question that really matters: what assets are truly at risk in your network right now? One of those elements that I highlighted was configuration management.
Configuration management has traditionally been pitched as a tool that can help eliminate mistakes and downtime within your network. That certainly is one of the benefits that configuration tools provide. However, I would argue that configuration tools are also a risk management tool, particularly on the network and network security side of the house. If a router admin adds an ACL that suddenly opens access to an internal network from outside networks, that is a huge risk to the network. If a firewall admin mistakenly pushes an overly permissive policy that permits any source and any service to an internal network, you need to be alerted to the risk. As I noted in my talk, ideally your configuration tool also interoperates with your attack visualization tool, and updates the attack topology continuously and in real-time as these changes are made to the network and network security devices in your environment.
I also noted that there are others doing great work around this idea of operationalizing risk, or building a risk platform. Securosis has an amazing white paper discussing building a vulnerability management platform, and all of the elements needed to truly address risk in your environment. As they note in their paper, “There really shouldn’t be a distinction between scanning for a vulnerability and checking for a bad configuration. Either situation provide an opportunity for compromise.” Don’t open up your environment to potential compromise; be sure to include device configuration management as part of your day-to-day risk operations.
If you left your car unlocked with valuables visible in the front seat, would you blame the car manufacturer if someone stole those items? I doubt you would, and I seriously doubt anyone would listen if you tried. But a recent US Federal Court of Appeals ruling in the case of Patco Construction v. People’s United Bank might indicate that, yes, the car manufacturer, or in this case the bank, is liable.
The case revolves around the plaintiff, whose credentials and account information were compromised. The cyber thieves were then able to log in to the bank’s site and initiate several transfers totaling more than half a million dollars. When the fraud was reported, the bank was able to recover about half of the stolen funds, but it refused to refund Patco the rest of the stolen money.
In this case, the bank’s position was that Patco was negligent enough to have its usernames, passwords and account information stolen. Why should the bank bear the cost of Patco’s mistake? The bank said that Patco should be responsible for its own losses. It is hard to argue against the bank’s position. Why should it bear the loss when it had nothing to do with Patco exposing its credentials?
But Patco is understandably upset given the size of the loss, and it has a valid concern: the bank’s own internal systems flagged the transactions as suspect and yet didn’t stop them. Patco’s argument is that regardless of why or how the fraud was initiated, the bank must provide commercially reasonable controls to prevent fraud. The court was persuaded by this argument.
For now, the case has been sent back down for adjudication, and the court suggested to both parties that they try to settle before a verdict. But the case itself brings up a bigger issue:
When does an organization or individual have to take responsibility for its own security? If the bank were held liable in the Patco case, what message does that send? One could say the message is: don’t worry too much about your online banking credentials, because at the end of the day, if anything bad happens, the bank is liable anyway. I don’t think that is the message we should be sending. How can we expect banks to take on this exposure without figuring that risk into the fee equation?
No doubt there are many instances of negligence and poor security where consumers should hold the failing institution liable for loss of money or information. But there must be some shared responsibility for security.
Every organization, regardless of size, has limited resources when trying to address the security of its network. Whether you work in a large Fortune 500 environment or a small business, the limitations of the resources allocated to security require you to make some tough decisions about what you will or won’t do when it comes to securing the organization. Here at FireMon, we believe there is really only one question that matters when prioritizing what to do when it comes to securing your network: which assets are truly at risk?
As Securosis pointed out in their excellent Vulnerability Management Evolution white paper earlier this year, organizations “need the ability to analyze threat-related data, combine it with an understanding of what is vulnerable, and provide visibility to what is meaningfully at risk.” When trying to address the risk to their environment, most organizations have relied on the vulnerability scanner. Vulnerability scanners are extremely effective at their job, and are a core component of identifying vulnerabilities within your network. Simply running a vulnerability scanner by itself, though, and then deciding which of the hundreds, thousands or tens of thousands of vulnerabilities should be patched is not enough. Without knowledge of the network topology and the mitigating security controls that are in place, vulnerability scan results are just another list of things to get to at some point when trying to prioritize your network security activities.
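Here is a minimal sketch of that idea: intersect raw scanner findings with the access the network actually allows, so only the exposed findings make the “meaningfully at risk” list. The data shapes, addresses and placeholder vulnerability IDs are invented for illustration.

```python
# Raw scanner output: host, listening port, and a placeholder finding ID.
scan_findings = [
    {"host": "10.1.1.5",  "port": 445,  "vuln": "VULN-A"},
    {"host": "10.1.1.5",  "port": 3389, "vuln": "VULN-B"},
    {"host": "10.1.2.20", "port": 443,  "vuln": "VULN-C"},
]

# (host, port) pairs reachable from untrusted networks, as derived from the
# firewall policy (hard-coded here; a real tool would compute this from device configs).
allowed_inbound = {("10.1.2.20", 443)}

def meaningfully_at_risk(findings, allowed):
    """Keep only findings on services the network actually exposes."""
    return [f for f in findings if (f["host"], f["port"]) in allowed]

for f in meaningfully_at_risk(scan_findings, allowed_inbound):
    print(f"{f['host']}:{f['port']} ({f['vuln']}) is exposed -- remediate first")
```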
Fortunately, we have done a lot of work in developing a tool that understands which assets are truly vulnerable on your network. FireMon Security Manager with the patented Risk Analyzer add-on enables you to visually see exactly which assets are meaningfully at risk. Our partnership with Rapid7 and the integration of Metasploit with Risk Analyzer takes this understanding to an even deeper level, allowing you to prioritize not only which assets are vulnerable, but which can actually have exploit code executed against them by an attacker. You can learn more about this enhanced integration in a joint on-demand webinar we recently did with Rapid7 here. FireMon will also be highlighting the importance of operationalizing risk on day 2 of the 2012 United Security Summit. We hope to see you there.