Posts Tagged ‘Network Risk Analysis’
Over at Krebs on Security, last week’s post “FDIC: 2011 FIS Breach Worse Than Reported” offered a rare and fascinating look at the monetary and brand-reputation effects a real-world breach can have on a corporation. The post provides an in-depth review of the impact of the 2011 breach at FIS, in which FIS originally stated in its filing with the SEC that “7,170 prepaid accounts may have been at risk and that three individual cardholders’ non-public information may have been disclosed as a result of the unauthorized activities”. The article provided two very interesting insights. First, there are truly real-world financial and brand consequences for failing to effectively implement network security controls. Krebs’s article provides an in-depth look at the results of the FDIC audits performed at FIS in 2011 and 2012 as a result of the original breach incident. What was interesting to learn is that because FIS is a service provider to banks and not actually a bank, the FDIC is unable to levy fines against it or shut it down directly. However, in May of this year, the FDIC sent the results of its audits to all of FIS’s customers, as the post highlights, with an attached letter that began, “We are sending you this report for your evaluation and consideration in managing your vendor relationship with FIS.” The FDIC made this decision despite the fact that FIS has spent over $100 million trying to shore up its network security controls. The FDIC’s actions will obviously have a negative brand and revenue impact for FIS.
The second interesting point within the post was the detail around the environment FIS was attempting to secure, and the number of vulnerabilities it was dealing with. Portions of the FDIC report quoted in the post showed that FIS was managing “approximately 30,000 servers and operating systems, another 30,000 network devices, over 40,000 workstations, 50,000 network circuits, and 28 mainframes running 80 LPARs”. The post also highlights that “The Executive Summary Scan reports from November 2012 show 18,747 network vulnerabilities and over 291 application vulnerabilities as past due”. While 18,747 vulnerabilities identified in a scan might seem like a lot, it is not uncommon in a network of this size and scope. Many FireMon customers have seen scan results with an even greater number of identified vulnerabilities. The challenge when faced with this many vulnerabilities is knowing which ones truly matter. Out of 18,000+ vulnerabilities, how would you know which ones to remediate first? Manually sorting through the vulnerabilities, or simply patching the highest-value assets, doesn’t actually solve the problem. The only way to solve it proactively is an automated, intelligent, continuous real-time assessment that shows which assets are truly reachable over the network by an attacker, and which remediation efforts will reduce the greatest amount of risk (and access).
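To make the prioritization idea concrete, here is a minimal Python sketch. It is not FireMon’s actual engine; the field names, scores, and hosts are invented for illustration. It filters scan findings down to hosts an attacker can actually reach, then ranks the rest by risk reduced per hour of remediation effort:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cvss: float        # severity score from the scanner
    reachable: bool    # can an attacker actually route to this host?
    asset_value: int   # relative business value of the affected asset
    effort_hours: int  # estimated remediation effort

def prioritize(findings):
    # Discard findings on hosts an attacker cannot reach, then rank
    # the remainder by risk reduced per hour of remediation work.
    actionable = [f for f in findings if f.reachable]
    return sorted(actionable,
                  key=lambda f: (f.cvss * f.asset_value) / f.effort_hours,
                  reverse=True)

findings = [
    Finding("db01", 9.8, True, 10, 2),
    Finding("lab-vm", 9.8, False, 1, 1),   # severe, but unreachable
    Finding("web01", 7.5, True, 8, 4),
]
for f in prioritize(findings):
    print(f.host)
```

Even this toy ranking drops a critical-severity finding on an unreachable lab host entirely while surfacing the reachable, high-value server first, which is the point of reachability-aware prioritization.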
Citing a report prepared for the Defense Department by the Defense Science Board, the Washington Post published an article today highlighting attacks from Chinese cyber-spies that compromised U.S. weapons systems designs. The Post noted that the attacks exposed “programs critical to U.S. missile defenses and combat aircraft and ships.” The article specifically noted that “the advanced Patriot missile system, known as PAC-3; an Army system for shooting down ballistic missiles, known as the Terminal High Altitude Area Defense, or THAAD; and the Navy’s Aegis ballistic-missile defense system” were compromised, as well as “vital combat aircraft and ships, including the F/A-18 fighter jet, the V-22 Osprey, the Black Hawk helicopter and the Navy’s new Littoral Combat Ship”.
The Post’s article does not specifically cover how the designs were stolen, what methods were used to attack the networks, or whether the attacks were aimed at U.S. Government networks or defense contractors, although anonymous U.S. officials cited in the article “said senior U.S. defense and diplomatic officials presented the Chinese with case studies detailing the evidence of major intrusions into U.S. companies, including defense contractors.” The article also cited a recent National Intelligence Estimate finding “that China was by far the most active country in stealing intellectual property from U.S. companies”. This comes on top of Mandiant’s Intelligence Center Report earlier this year detailing the activities of APT1, a China-based cyber-espionage group believed to be a unit of the People’s Liberation Army (PLA).
While the term cyber-warfare has been hyped extensively, and sometimes disingenuously, within the information security community, these reports highlight that certain cyber threat actors today are actively engaged in targeted attacks to extract information from networks. Without full details of how the attacks were executed, one can only speculate that the attackers discovered exploitable vulnerabilities within the network to gain access and ultimately extract this data. It is yet further evidence that a reactive information security stance ultimately will not protect an organization from a dedicated attacker. To truly secure our networks, we as security practitioners must proactively identify the vulnerable systems on our networks that could lead to a breach before the attackers do, and prioritize our remediation efforts around the systems that pose the greatest risk. Furthermore, to ensure ongoing security, practitioners must be able to know in advance whether proposed network or security changes will introduce risk or expose systems to breach, and remediate these exposures before the change is committed. We have discussed this topic many times here on the FireMon blog, and pointed out that the technology to enable a risk-based security posture is already available. While many Federal officials have called for expedited adoption of a proactive risk policy, articles like today’s in the Washington Post show that those calls are not being heeded fast enough.
A Federal Times article recently noted that three former Federal IT executives, including two high-ranking IT security officials from the Office of Management and Budget (OMB), felt that government IT security was too focused on compliance measures that “oftentimes do not reflect their agencies’ most critical security needs”. In a new report entitled “Measuring What Matters: Reducing Risk by Rethinking How We Evaluate Cybersecurity”, the authors note that government agencies “continue to spend scarce resources on measures that do little to address the most significant cyber threats.”
The report outlines the authors’ proposal for a new approach to security, the Organization Cyber Risk Management Framework. This is a risk-centric security management posture focused on establishing a security baseline for agencies that allows them to correctly assess their risk posture based on empirical data. The authors note that in order to move to this framework, agencies must first implement automated continuous monitoring programs, which they identify as “continuous diagnostics and mitigation, configuration management, threat assessment, and remediation practices.” We at FireMon could not be more excited to see the report identify the importance of configuration management, and we have highlighted its relationship to risk on this blog previously. When discussing a risk-based approach, security practitioners tend to gravitate to threat management. Threat management is sexy; it involves attacks and attackers, and makes security practitioners feel more like MacGyver than Dilbert. Configuration management, on the surface, seems less sexy. Getting a notification that someone added a new ACL to a router doesn’t evoke images of thwarting a hacker’s attack. Consider, though, the all too common scenario where the router admin fat-fingered said ACL and accidentally enabled access to an internal network that should not be reachable from the outside world. Without real-time configuration change alerting that can identify a violation of agency or corporate security policy, an attacker might end up being the one who ultimately alerts the organization to the misconfiguration.
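The fat-fingered ACL scenario can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor’s policy engine: the rule format and the single “nothing from the outside may reach internal address space” policy are assumptions made for the example.

```python
import ipaddress

# Assumed corporate policy: the internal range must never be
# reachable from arbitrary outside sources.
INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def violates_policy(rule):
    """Flag permit rules that expose internal space to any source."""
    if rule["action"] != "permit":
        return False
    dst = ipaddress.ip_network(rule["dst"])
    return rule["src"] == "any" and dst.subnet_of(INTERNAL)

# The fat-fingered entry: the admin meant a small DMZ subnet, but
# permitted the entire internal range from any source.
change = {"action": "permit", "src": "any", "dst": "10.0.0.0/8"}
if violates_policy(change):
    print("ALERT: ACL change violates internal-exposure policy")
```

A real-time change monitor running a check like this against every committed ACL would raise the alert immediately, rather than leaving the attacker to find the hole first.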
The report is very comprehensive, and provides a thorough framework for implementing a risk-based security practice. While it is clearly focused on Federal Government agency environments, it provides good insights for corporate security practitioners as well. The report concludes that “To fix the problems of today and those of the years ahead, government should implement a more consistent method of evaluating cybersecurity threats — one which is measurable, transparent, and outcome-oriented.” It is refreshing to see a recommendation not only for moving to a risk-based security posture, but one that includes device configuration management and its importance in truly knowing your risk posture.
Richard Stiennon recently posted an article on Network World discussing why risk management fails in IT. Mr. Stiennon posits that risk management is a carry-over from the bigger world of business, and does not work in the infosecurity world. Stiennon identifies four key points in defense of his position:

1. It is expensive and almost impossible to identify all IT assets.
2. It is impossible to assign value to IT assets.
3. Risk management methods invariably fail to predict the actual disasters.
4. Risk management devolves to “protect everything.”

He finishes his article by stating that we need to move to “threat management” as opposed to risk management.
Let’s address each of Stiennon’s points. His claim that it is impossible to identify all IT assets is in fact wrong: tools exist today that can automate the identification of all assets within an organization, such as Insightix from our partner McAfee. It is also not impossible to assign value to IT assets. The FAIR framework has provided a comprehensive guide to assigning value to IT assets within a risk management framework for years. At a basic level, most organizations can at least identify their most valuable assets (where the finance information is, where the intellectual property resides, etc.) and devise a ranking or value system around that. Nor is it true that risk management fails to predict the actual disasters. Many companies provide software that automates the analysis of your network and identifies exactly which assets are truly at risk, including our own Risk Analyzer. Finally, most security practitioners would say that their job is in fact to protect everything within their network environment. I have yet to meet a security professional who talks about the assets they are simply writing off and not worrying about protecting.
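The “basic level” valuation described above can be as simple as a category-to-weight table. Here is a minimal sketch; the categories, weights, and asset names are purely illustrative assumptions, and a fuller treatment would follow a framework like FAIR:

```python
# Illustrative business-value weights per asset category.
VALUE_WEIGHTS = {
    "finance": 10,
    "intellectual_property": 9,
    "customer_data": 8,
    "internal_tools": 4,
    "test": 1,
}

assets = [
    {"name": "erp-db", "category": "finance"},
    {"name": "build-server", "category": "internal_tools"},
    {"name": "design-repo", "category": "intellectual_property"},
]

# Rank assets by the value of what they hold, highest first.
ranked = sorted(assets,
                key=lambda a: VALUE_WEIGHTS[a["category"]],
                reverse=True)
print([a["name"] for a in ranked])
```

Even a crude table like this is enough to start prioritizing protection and remediation, which is the point: imperfect valuation beats no valuation.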
Furthermore, Stiennon’s position assumes that there is some fundamental or significant difference between “threat management” and “risk management”. Webster’s defines threat as “an indication of something impending, an expression of intention to inflict evil, injury or damage”, and risk as “the possibility of loss or injury; someone or something that creates or suggests a hazard.” I would argue that these terms are more similar than opposed. Unfortunately, Stiennon doesn’t elaborate on what “threat management” is beyond a link to an article on UTM appliances.
Risk management is indeed a challenging practice to implement within an IT organization. In large enterprise and service provider environments, it is truly a huge undertaking. However, it is not so difficult that it can’t be done, or be effective, and therefore I have to respectfully disagree with Stiennon’s position. Here at FireMon, we have published a series of posts on how to effectively operationalize and automate risk management within your everyday IT security operations, leveraging the real-time Security Manager and Risk Analyzer solution. Securosis has an amazing whitepaper discussing vulnerability management platforms aimed at effective risk management within IT, and SIRA offers insights and guidance on how to achieve this daily. Risk management is an effective, necessary and crucial part of any organization’s IT security operation, and the reports of its untimely death are greatly exaggerated.
Yet another systems breach was reported last week, this time at the University of North Florida, affecting 23,000+ students. This in and of itself is unfortunately nothing new, as we have been inundated with weekly reports of breaches at organizations over the last 18 months. What struck a chord with this incident at UNF, however, is that it is not the first time the college has experienced data loss from an external attacker. In October of 2010, the school was also attacked by an external hacker, and 107,000 students were affected in that incident. UNF has posted an FAQ on the latest attack here. One of the more interesting questions is what the university is doing to make sure this doesn’t happen again, with the school providing the following answer: “The method used by the intruder to gain access has been identified and steps have already been taken to prevent a reoccurrence. The University Police Department, in conjunction with Housing and ITS, is investigating this incident.”
Considering this is the second time the school has been attacked, one can imagine this response wasn’t too reassuring to the students. The incident also shows that the traditional reactive approach to security needs to be replaced by a proactive, risk-based approach. After the first incident in 2010, the school stated that “The university shut down the compromised server and has taken other precautions to prevent future incidents.” One can only assume that the specific exploit on the compromised server was patched, or perhaps a specific service was blocked at the firewall. Reacting to that specific threat, and assuming the remediation actions taken would protect the school moving forward, clearly was not the most comprehensive approach to defending against future threats.
The most successful organizations combating risk today “have a much better handle controlling what is deployed on their networks and whether these assets are vulnerable to imminent threats”, as Jon Oltsik noted earlier this month on his blog. He also pointed out, though, that only 20% of organizations today have a risk management plan in place that includes some form of threat intelligence. FireMon has always believed it is important to proactively identify areas of risk, whether they come from adding a firewall rule that inadvertently introduces risk by being overly permissive, or from identifying in real time which assets on your network are most vulnerable to exploitation. With the release of Security Manager 6.0 with the Risk Analyzer add-on, organizations now have a complete Security Posture Management tool that provides unparalleled visibility into the scope of business vulnerability and prioritizes the proactive defense of critical assets, while maintaining high confidence that their security infrastructure is free of human error or incompatibilities between policies and protection. Avoid having to post a breach FAQ; adopt a proactive, risk-based approach to security management today.
Over at Dark Reading, John Sawyer wrote an interesting article about the need for threat intelligence within organizations in today’s threat landscape. He notes that “Being able to keep up with changing technology, emerging threats, and information overload that goes with managing thousands to tens of thousands systems requires proactive efforts on the part of security pros”. Sawyer also points out that simply relying on the security products that you already have in place to protect your organization is not enough. The author makes a key point that “To adequately address the threats against their organizations, enterprise security pros need to understand exactly what they’re trying to protect — a seemingly innocent but burdensome task that requires them to know their systems and networks inside and out”.
With this last point highlighted, Sawyer goes on to advocate that organizations need to start developing processes to mine both internal and external threat intelligence. He notes that all organizations have log data that they could be mining for insight. Those that are tight on cash could write scripts to mine logs “to produce reports about failed logins, port scans, top IDS events, and more”. He further advocates the use of SIEM technology for those organizations that can afford it. The author also notes the importance of gathering external intelligence around threats, whether doing so manually or by leveraging paid services which provide the information.
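For organizations taking the script route Sawyer describes, even a few lines of Python can produce a useful failed-login report. This is a minimal sketch: the log line format below is typical of Linux sshd syslog output, but verify it against your own systems before relying on it.

```python
import re
from collections import Counter

# Match the source IP of failed SSH password attempts.
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def failed_logins(lines):
    """Count failed login attempts per source IP."""
    counts = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    "May 28 10:01:02 host sshd[913]: Failed password for root from 203.0.113.9 port 55122 ssh2",
    "May 28 10:01:05 host sshd[913]: Failed password for admin from 203.0.113.9 port 55124 ssh2",
    "May 28 10:02:11 host sshd[917]: Accepted password for bob from 198.51.100.7 port 50012 ssh2",
]
print(failed_logins(sample).most_common())
```

Pointed at `/var/log/auth.log` (or your platform’s equivalent) on a schedule, this kind of script is the zero-budget version of the internal intelligence Sawyer advocates; a SIEM does the same job at scale.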
One point in particular that Sawyer highlights is as follows: “security teams are being forced into developing threat intelligence operations to react quickly and mitigate new vulnerabilities as they crop up”. We at FireMon absolutely agree, but we also advocate that simply reacting quickly isn’t enough in today’s evolving threat landscape. Organizations need to operationalize risk into their everyday security operations, proactively identifying and remediating potential risk to their networks before an attacker even has the opportunity to exploit a vulnerability. That is why we introduced our Risk Analyzer product last year, and why we are excited to incorporate that technology in our new Security Manager 6.0 release, providing the industry’s first complete security posture management solution. We invite you to see how this security posture technology can bring proactive and automated risk intelligence to your everyday security operations.
In our first post on accurately measuring and scoring risk, we examined the holistic network approach many enterprises take to managing risk: run vulnerability scanners against parts of the network, or the network in its entirety, at some predetermined interval. In both cases, scans are run, vulnerabilities are identified and possibly prioritized based on asset value, patching activities are scheduled over the next month or quarter, and the cycle repeats. As we noted, this approach over-simplifies the complex task of risk management, as different threats and different assets define different risks.
The answer to this dynamic risk challenge is clear. Organizations need to operationalize risk into their daily security activities rather than treating risk management as a set event that occurs at predetermined intervals. As changes occur to the organization’s risk posture, whether from the business activities noted in our last post or from larger corporate events such as M&A or moving to the cloud, security organizations need to be able to dynamically and easily analyze the change to their risk posture in real time. To do so effectively, a tool that provides the ability to create different risk scenarios is required. Scenarios enable an organization to address each different threat to its assets as changes occur.
In the previous post, we provided the example of a business unit requesting VPN access to a new business partner after the predetermined scan had already been run. Leveraging a tool that provides the ability to create different risk scenarios, the security team would be able to create a new scenario to identify the new connectivity from the business partner into their network. To truly be effective, the tool would not only need to be able to identify this new connection, but have the contextual awareness of the firewall policy, network topology and any other network security devices that might be traversed between the front and back end systems involved in this new connectivity to accurately identify any potential vulnerabilities that are introduced from this new partnership.
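The reachability analysis behind such a scenario can be illustrated with a toy model: treat the network as a graph whose edges are permitted flows, then compare what is reachable before and after the new partner connection. The topology, host names, and flows below are invented for illustration; a real tool would derive the edges from firewall policies, routing, and device configurations.

```python
from collections import deque

def reachable(edges, start):
    """BFS over permitted flows from a given entry point."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Baseline permitted flows and the set of known-vulnerable assets.
baseline = [("internet", "dmz-web"), ("dmz-web", "app")]
vulnerable = {"hr-db"}

# Scenario: the new partner VPN terminates at the app tier, which
# in turn has a permitted flow to the HR database.
scenario = baseline + [("partner-vpn", "app"), ("app", "hr-db")]

print(reachable(baseline, "internet") & vulnerable)    # before the change
print(reachable(scenario, "partner-vpn") & vulnerable) # after the change
```

The vulnerable database is unreachable from the internet in the baseline, but the scenario shows it becomes reachable from the partner VPN, exactly the kind of newly introduced exposure a scenario re-run is meant to surface before the connectivity goes live.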
FireMon Risk Analyzer is just that tool. Risk Analyzer enables administrators to create different scenarios: VPN connectivity to new business partners, connectivity to a cloud provider, a new data center coming online. Combined with Risk Analyzer’s full network topology and security policy awareness (which can be continually updated in real time via FireMon Security Manager), end users are able to define new risk scenarios, proactively identify the new risk each scenario introduces, and virtually apply remediation to ensure that the most effective remediation is completed with the least amount of effort. Multiple scenarios can be created as different threats or business events are identified, and as changes occur to the configuration or connectivity within a scenario, end users can easily and immediately re-run it within Risk Analyzer to assess how these changes affect the true risk posture of the organization. Risk scenarios enable organizations to achieve the goal of operationalizing risk into their everyday activity.
As those of you who have followed this blog over the past couple of months know, we have been slowly revealing bits and pieces about our new Risk Analyzer product here at Firemon. Over the next week and in the coming months, you will see and hear a huge push around risk from all areas of Firemon. The official release of Risk Analyzer is imminent, as our CEO noted in his Twitter feed this morning. We have also highlighted our partnership with Juniper Networks around Risk Analyzer and JunOS Space. You can get even more insight into what we are doing together on Juniper’s YouTube channel.
Why are we suddenly so focused on risk, and why is it something you should care about? At the end of the day, all of the security controls organizations have put in place (firewalls, IDS/IPS devices, proxies, ACLs, desktop firewalls, and so on) are there to help reduce and eliminate risk to the IT infrastructure. Risk is what we are trying to control and limit. However, as we have previously highlighted, analyzing risk in today’s networks is a huge challenge. We tend to rely on a single tool to determine risk, and in today’s complex network environments, these tools can present thousands of items that an organization needs to address. Attempting to manually review that list and prioritize the remediation results in organizations spending too little or too much time trying to reduce their risk. Furthermore, those tools lack full contextual awareness of your entire network topology and how data flows through the environment, which is the real key to accurately identifying the areas of your infrastructure that are most at risk.
Risk Analyzer provides that full network topology context that is so critical to accurate risk analysis. It automatically shows you which actions will reduce the greatest amount of risk with the least amount of effort, ensuring your valuable resources spend exactly the time needed to effectively reduce risk to your infrastructure. Its patented analysis engine, proven over the past four years in the largest DOD and Intelligence networks, produces results in seconds as opposed to the hours or even days other solutions require. It graphically shows you where you are at risk from any part of your infrastructure. Risk Analyzer will help you automate the reduction of risk to your IT infrastructure.
This is why we are excited about Risk Analyzer and so focused on Risk. Risk, after all, is the key.
The amount of news generated around attacks in 2011 has been overwhelming. In just the last week, reports of SCADA-based attacks have reached almost histrionic levels. Attacks on NASA, AT&T and VCU have all been highlighted this month as well. Despite the fact that companies will spend over $8 billion on network security this year, hackers continue to breach networks with alarming regularity.
In an article on APTs posted on Dark Reading yesterday, Sean Brady from RSA had an interesting quote. He said “Identifying the entry point — where an attacker got into a company’s network — is a key aspect of identifying and responding to an advanced attack”. At Firemon, we couldn’t agree more. However, we would also ask: why wait until you’ve been attacked to discover the entry point? Why not proactively find it yourself? As the attack coverage in the press this year clearly indicates, attackers are actively looking for the entry point into your network even as you read this post.
Firemon’s new Risk Analyzer technology is designed to proactively find the exploitable entry points into your network. Risk Analyzer will also identify where an attacker can pivot off an access point, and what other resources within your network can be compromised from it. It will also prioritize which vulnerabilities, once patched, will reduce the greatest amount of risk with the least amount of effort, helping to focus your organization’s remediation efforts. Don’t be the last to discover the entry points exposed in your network; he who finds the entry point first wins.
I read a quick blog post this morning from Rick Holland at Forrester; in fact, part of my title is borrowed from a line in his post. As security professionals, I think it is important to recognize that despite our best efforts, many of the network security controls we have deployed have still failed to prevent breaches and attacks from occurring. Holland, along with John Kindervag, has published a new report called “Planning for Failure”. They note that this year’s headlines have not been encouraging for the security world, as evidenced yet again yesterday by the Steam website hack and the takedown of Estonian hackers in Operation Ghost Click.
The deluge of news around breaches and incidents this year should not cause us to throw up our arms and head for the exits. It should galvanize those of us in the security world to be more proactive about assessing the risk posture of our organizations, identifying our areas of weakness, and fixing them before an incident occurs. As Holland notes in his post, “An ounce of preparation is worth a pound of remediation”. The full Planning for Failure report also stresses the importance of testing. We at Firemon could not agree more. Our new Risk Analyzer technology enables organizations to test their entire network topology, factoring in the network security controls that are in place, and identify exactly where attackers could breach the network. Risk Analyzer will even highlight systems susceptible to client-side vulnerabilities that attackers could access despite effective network security controls, and identifies where attackers could penetrate further into the network by pivoting off these assets. Risk Analyzer’s patented analysis engine provides real-time analysis and graphically shows you where in your topology you are vulnerable. It also helps you laser-focus on the remediation steps that will reduce the greatest amount of risk with the least amount of effort, by providing a prioritized list of remediation actions and allowing a user to virtually apply said patches, graphically showing the impact that remediation effort has on the network’s risk posture.
We are excited to release Risk Analyzer this month, and believe it is a key part of the proactive testing process that all security organizations should implement as part of their overall incident management plan. Risk Analyzer will allow you to substantially reduce your risk posture, prioritize your remediation efforts, and measure the effectiveness of the security controls you have put in place.