Yet another systems breach was reported last week, this time at the University of North Florida, affecting more than 23,000 students. This in and of itself is unfortunately nothing new, as we have been inundated with weekly reports of breaches at organizations over the last 18 months. What stands out about this incident at UNF, however, is that it is not the first time the university has experienced data loss at the hands of an external attacker. In October 2010, the school was also attacked by an external hacker, and 107,000 students were affected in that incident. UNF has posted an FAQ on the latest attack here. One of the more interesting questions is what the university is doing to make sure this doesn’t happen again, with the school providing the following answer: “The method used by the intruder to gain access has been identified and steps have already been taken to prevent a reoccurrence. The University Police Department, in conjunction with Housing and ITS, is investigating this incident.”
Considering this is the second time the school has been attacked, one can imagine this response wasn’t too reassuring to the students. The incident also shows that the traditional reactive approach to security needs to be replaced by a proactive, risk-based approach. After the first incident in 2010, the school stated that “The university shut down the compromised server and has taken other precautions to prevent future incidents.” One can only assume that the specific exploit on the specific server that was compromised was patched, or perhaps a specific service was blocked at the firewall. Reacting to that one threat and assuming that those remediation actions would protect the school going forward was clearly not a comprehensive approach to defending against future attacks.
The most successful organizations combating risk today “have a much better handle controlling what is deployed on their networks and whether these assets are vulnerable to imminent threats”, as Jon Oltsik noted earlier this month on his blog. He also pointed out, though, that only 20% of organizations today have a risk management plan in place that includes some form of threat intelligence. FireMon has always believed it is important to proactively identify areas of risk, whether that means catching a firewall rule that inadvertently introduces risk by being overly permissive, or identifying in real time which assets on your network are most vulnerable to exploitation. With the release of Security Manager 6.0 with the Risk Analyzer add-on, organizations now have a complete security posture management tool that provides unparalleled visibility into the scope of business vulnerability, helps prioritize the proactive defense of critical assets, and maintains high confidence that the security infrastructure is free of human error and incompatibilities between policies and protection. Avoid having to post a breach FAQ; adopt a proactive, risk-based approach to security management today.
Over at Dark Reading, John Sawyer wrote an interesting article about the need for threat intelligence within organizations in today’s threat landscape. He notes that “Being able to keep up with changing technology, emerging threats, and information overload that goes with managing thousands to tens of thousands systems requires proactive efforts on the part of security pros”. Sawyer also points out that simply relying on the security products that you already have in place to protect your organization is not enough. The author makes a key point that “To adequately address the threats against their organizations, enterprise security pros need to understand exactly what they’re trying to protect — a seemingly innocent but burdensome task that requires them to know their systems and networks inside and out”.
With this last point highlighted, Sawyer goes on to advocate that organizations need to start developing processes to mine both internal and external threat intelligence. He notes that all organizations have log data that they could be mining for insight. Those that are tight on cash could write scripts to mine logs “to produce reports about failed logins, port scans, top IDS events, and more”. He further advocates the use of SIEM technology for those organizations that can afford it. The author also notes the importance of gathering external intelligence around threats, whether doing so manually or by leveraging paid services which provide the information.
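For teams starting from scratch, the kind of script Sawyer has in mind can be very small. As a rough sketch (not from Sawyer’s article), here are a few lines of Python that count failed SSH logins per source address from a typical Linux auth log; the log path and message format are assumptions, so adjust them to your environment.

```python
#!/usr/bin/env python3
"""Count failed SSH logins per source address from a syslog-style auth log.
The log path and message format below are assumptions (typical Linux sshd
output); adjust the regex and path to match your own environment."""

import re
from collections import Counter

AUTH_LOG = "/var/log/auth.log"  # assumed location; varies by distribution
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins(path=AUTH_LOG):
    hits = Counter()
    with open(path, errors="replace") as fh:
        for line in fh:
            match = FAILED.search(line)
            if match:
                user, src_ip = match.groups()
                hits[(src_ip, user)] += 1
    return hits

if __name__ == "__main__":
    for (src_ip, user), count in failed_logins().most_common(20):
        print(f"{count:6d} failed logins from {src_ip:>15} as user {user}")
```

The same parse-aggregate-report pattern extends to port scans, top IDS events, and the other reports Sawyer mentions.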
One point in particular that Sawyer highlights is this: “security teams are being forced into developing threat intelligence operations to react quickly and mitigate new vulnerabilities as they crop up”. We at FireMon absolutely agree, but we also contend that simply reacting quickly isn’t enough in today’s evolving threat landscape. Organizations need to operationalize risk into their everyday security operations and proactively identify and remediate potential risk to their networks before an attacker even has the opportunity to exploit a vulnerability. That is why we introduced our Risk Analyzer product last year, and why we are excited to incorporate that technology into our new Security Manager 6.0 release, providing the industry’s first complete security posture management solution. We invite you to see how this security posture technology can bring proactive, automated risk intelligence to your everyday security operations.
Roger Grimes and I have engaged in a very interesting conversation around the necessity and value of firewalls. Yesterday I took issue in my blog post with Roger’s initial claim that the firewall is dead. In response, Roger continues his argument in his post, The Firestorm over Firewalls.
Roger seems to have conceded the argument on ineffective management and instead doubled down on two core points:
- 99% of all attacks are client-side initiated and the firewall is ineffective at protecting against these attacks
- The fact that the industry is not more secure is proof that the firewall is worthless
I still take significant issue with the argument that 99% of all attacks are client-side, and Roger’s “proof” that anti-virus vendors block a lot of stuff is not compelling to me. Remember, firewalls “block” a lot of stuff too, with billions of logs of dropped traffic generated every second worldwide. Neither of these points is sufficient to make or dispel the 99% claim. The Verizon Data Breach Investigations Report I referenced is also not perfect, as Roger points out, since it only covers a minority of all attacks worldwide. But it is the best source I am aware of, so I think it is still worth referencing. And pointing to a sample graphic (on page 8) meant to describe a documentation standard as “proof” that client-side attacks are responsible for all breaches is not very compelling either, especially since it was not written to support this point in any way. However, even if we do accept Roger’s “proof” graphic on page 8, take a look at the paragraph describing it: it claims an egress filter (a firewall) could have prevented the breach, which would seem to dispel Roger’s obituary of the firewall.
But let’s set statistics aside. I imagine there are plenty of other people who can more credibly respond to Roger’s unsubstantiated claim that 99% of attacks are client-side. And I don’t mean to argue that client-side attacks are not an issue; I simply mean that they are not the only issue.
Instead, I would like to hypothetically accept Roger’s position that 99% of all successful attacks are client-side. I would argue this change in attack vectors over the years strengthens the case that a well-configured firewall is an effective security control. It is a matter of attackers coming in through the open window instead of the closed door. The growth in client-side attacks suggests the direct attack is being successfully thwarted by the firewall and that less effective defenses are the ones being exploited.
The great thing about a firewall is that it employs a positive security model: only what you decide to allow is permitted, and everything else is denied. When managed well, that makes it a great security solution. In contrast, malware detection and anti-virus software employ a negative security model, where everything is allowed and only known bad attacks are denied. This creates a horrible cat-and-mouse game that the attackers seem adept at winning by staying a step ahead of the latest signatures. Which raises the question: if Roger’s argument is that client-side attacks are the real problem, and the fact that we still have security problems is justification to “kill” a technology, why does he pick on the firewall? Shouldn’t he instead have declared anti-virus, anti-malware, or some other client-side technology dead? The firewall, according to Roger’s own logic, is the one technology in the game that is working.
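To make the contrast concrete, here is a toy sketch of the two models; the rules, addresses, and signature names are invented for illustration and are not meant to represent any particular product.

```python
"""Toy contrast between the firewall's positive security model (enumerate
what is allowed; deny everything else) and the anti-virus negative model
(enumerate what is known bad; allow everything else)."""

from ipaddress import ip_address, ip_network

# Positive model: the policy lists only permitted flows; the default is deny.
ALLOW_RULES = [
    # (source network, destination network, destination port) -- all invented
    ("10.0.0.0/8", "192.0.2.10/32", 443),   # internal users -> web server
    ("10.0.0.0/8", "192.0.2.25/32", 25),    # internal mail relay
]

def firewall_decision(src, dst, port):
    for rule_src, rule_dst, rule_port in ALLOW_RULES:
        if (ip_address(src) in ip_network(rule_src)
                and ip_address(dst) in ip_network(rule_dst)
                and port == rule_port):
            return "allow"
    return "deny"  # the implicit default: anything not explicitly allowed

# Negative model: only traffic matching a known-bad signature is blocked.
KNOWN_BAD_SIGNATURES = {"malware-sig-123", "exploit-sig-456"}

def antivirus_decision(signature):
    return "deny" if signature in KNOWN_BAD_SIGNATURES else "allow"

print(firewall_decision("10.1.2.3", "203.0.113.9", 80))   # deny: not listed
print(antivirus_decision("brand-new-sig-789"))            # allow: not yet known
```

The firewall’s answer to anything it has never seen is deny; the anti-virus answer to anything it has never seen is allow, which is exactly the gap attackers exploit by staying ahead of signatures.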
The firewall isn’t dead. In fact, I think Roger’s arguments strengthen the case that firewalls are working. Are they perfect? No. Are they sufficient to solve every security problem? No. Should we get rid of them because they are not perfect? NO!
And this gets to the heart of the matter: the fact that security issues remain in information technology is not a matter of one technology working or not. It is not justification to call an effective technology “dead” because it doesn’t solve everything. When effectively managed, the firewall is a very effective security control. Additional capabilities in next-generation firewall technology continue to make it a relevant and central part of a security solution. It should not be considered THE solution, but it certainly shouldn’t be discounted either.
Today Roger Grimes posted an article on InfoWorld about the overdue death of the firewall: Why you don’t need a firewall. His case rests on two primary arguments: 1) the firewall doesn’t protect against modern-day threats, specifically client-side vulnerabilities and the fact that all apps run over ports 80 and 443, which can never be blocked at the firewall; and 2) the firewall is managed so poorly that it causes more problems than it solves.
Let’s separate these two points to discuss each more logically, starting with the value of a firewall in today’s threat environment. I take significant issue with his statement that, “Today, 99 percent of all successful attacks are client-side attacks”. This is not substantiated by any research, for good reason; it isn’t true. The Verizon Data Breach Investigations Report actually discusses successful attacks in significant depth and completely invalidates this point. It reports that 81% of all attacks and 99% of lost data are a direct result of “Hacking”. It goes on to specify that access to remote services (e.g. VNC, RCP) “combined with default, weak or stolen credentials” accounts for 88% of all breaches. The assumption that 99% of attacks are client-side is dead wrong.
With remote access to services remaining the greatest attack vector today, firewalls still play a very significant role, and they are changing dramatically. Roger also seems to be ignoring new advancements in firewall technology. Next-generation firewalls are specifically adept at helping prevent client-side attacks. No longer are ports 80 and 443 an open highway through which everything can pass. User-based and application-based policies permit effective control of outbound access.
Roger’s second point, on ineffective management, is something I agree is a problem, but I don’t agree with his conclusion. His argument that ineffective management, where rules are created that permit nearly all access, renders the firewall useless is absolutely correct. Ineffective management that leads to poor configurations can turn the best firewall technology into nothing more than a router passing all traffic. But his conclusion that this means the firewall should die is a really bad leap in logic. Poor management is not cause to kill the technology. Instead, I propose more effective management.
FireMon has been dedicated to this very idea of better firewall management for over a decade. Ineffective firewalls are not caused by bad technology or incapable administrators; they are caused by a lack of effective management. A stream of 1,000 logs per second won’t make any sense if a human tries to process their meaning while staring at a screen, but with some automation of log analysis, those logs can provide a wealth of information. Five hundred complex rules in a single firewall policy may be nearly impossible to evaluate by hand to understand what access is truly being allowed, but with a powerful policy analysis tool, it is a trivial exercise. Even Roger’s example of a poorly defined rule left at “ANY ANY” because requirements were missing is a solvable problem with the right tools. FireMon provides a powerful Traffic Flow Analysis tool that analyzes traffic flowing through overly permissive rules, permitting retroactive correction of these problematic rules.
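To illustrate the general idea (this is a simplified sketch, not FireMon’s implementation), consider a small script that flags overly permissive rules in a policy and then uses the traffic actually observed through them to suggest a tighter definition. The rule and log formats are invented for the example.

```python
"""Toy illustration of traffic flow analysis: find overly permissive rules
(e.g. ANY/ANY) in a policy, then use the traffic observed hitting those
rules to suggest a tighter definition. Formats are invented for the example."""

from collections import defaultdict

policy = [
    {"id": 1, "src": "10.1.0.0/16", "dst": "192.0.2.10", "service": "tcp/443", "action": "accept"},
    {"id": 2, "src": "ANY", "dst": "ANY", "service": "ANY", "action": "accept"},  # the problem rule
]

# Hypothetical log records of traffic that matched each rule.
observed = [
    {"rule": 2, "src": "10.2.3.4", "dst": "198.51.100.7", "service": "tcp/1521"},
    {"rule": 2, "src": "10.2.3.5", "dst": "198.51.100.7", "service": "tcp/1521"},
]

def overly_permissive(rules, threshold=2):
    """Flag rules with too many ANY fields."""
    return [r for r in rules
            if sum(r[f] == "ANY" for f in ("src", "dst", "service")) >= threshold]

def suggest_tightening(rule, logs):
    """Collapse observed traffic into the concrete flows the rule actually carries."""
    flows = defaultdict(int)
    for rec in logs:
        if rec["rule"] == rule["id"]:
            flows[(rec["src"], rec["dst"], rec["service"])] += 1
    return flows

for rule in overly_permissive(policy):
    print(f"Rule {rule['id']} is overly permissive; observed flows:")
    for (src, dst, svc), hits in suggest_tightening(rule, observed).items():
        print(f"  {src} -> {dst} {svc}  ({hits} hits)")
```

The point is simply that the data to fix a bad rule is usually already there; it just takes automation to surface it.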
The firewall is not dead and won’t be. With next gen capabilities and effective management – which is possible and available today – the firewall will remain a critical component of security solutions forever.
IBM just published their annual Chief Information Security Officer Assessment. There were many interesting insights highlighted within the report. One striking point was that the majority of CISO respondents view external threats as the primary security challenge they face. Traditionally within information security, internal threats have been touted as the greatest threat a security group should focus on. However, as IBM’s report notes, the increased media attention over the past two years on external threats and high-profile breaches, combined with customers’ and business units’ increased expectations around information protection, has shifted the focus toward the external threat.
With this increased focus on the external threat, the CISO respondents also noted that their attention is shifting toward risk management. Moving forward, the majority of CISOs “expect to be spending more of their time on reduction of potential future risk, and less on mitigation of current threats and management of regulatory and compliance issues.” John Meakin, the Global Head of Security Solutions & Architecture at Deutsche Bank, noted that “Given the dynamic nature of the challenge, measuring the state of security within an organization is increasingly important. Since threats are always moving and solutions are more complex, dynamic and often partial, knowing where you are is essential.” He concluded by adding that a key metric security organizations should focus on is “the speed and completeness of correcting known vulnerabilities.”
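Meakin’s metric is easy to compute once vulnerability discovery and remediation dates are tracked. As a rough illustration with invented records, it boils down to mean time to remediate plus the share of known vulnerabilities actually closed:

```python
"""Tiny sketch of 'speed and completeness of correcting known vulnerabilities':
mean time to remediate plus the share of known vulnerabilities closed.
All dates and records below are invented for the example."""

from datetime import date

# Hypothetical vulnerability records: (discovered, remediated or None if open)
vulns = [
    (date(2012, 5, 1), date(2012, 5, 15)),
    (date(2012, 5, 3), date(2012, 6, 2)),
    (date(2012, 5, 10), None),  # still open
]

closed = [(d, r) for d, r in vulns if r is not None]
mttr_days = sum((r - d).days for d, r in closed) / len(closed)
completeness = len(closed) / len(vulns)

print(f"Mean time to remediate: {mttr_days:.1f} days")
print(f"Remediation completeness: {completeness:.0%}")
```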
FireMon’s Risk Analyzer, combined with Security Manager, provides an automated tool that enables security organizations to identify not only potential future risk, but exactly which assets are vulnerable to attack. Risk Analyzer will also prioritize which actions will reduce the greatest amount of risk with the least amount of effort. This enables CISOs and their security organizations to track the speed and completeness of correcting known vulnerabilities, and to measure over time how they are improving their overall risk posture on the network. IBM’s report shows that CISOs are looking for ways to proactively reduce and manage risk. Risk Analyzer is the tool that enables CISOs to operationalize risk into their everyday activities and reduce their exposure to risk automatically and in real time.
SANS recently published their Analyst Program survey on log and event management. Report author Jerry Shank noted many interesting facts within the paper. Specifically, he highlighted that “The data suggests that respondents are having difficulty separating normal traffic from suspicious traffic,” and that security practitioners “need advanced correlation and analysis capabilities to shut out the noise and get the actionable information they need.” Despite the ever-evolving threat landscape, as noted in the latest Symantec Threat Report, there was another telling statement within the SANS report: “A large percentage of organizations—22 percent of the respondents—say they have little or no automation and no plans to change. The most common reasons given for not automating include lack of time and money… resources that are closely intertwined.”
Log analysis is certainly a key component of any organization’s security practice. Maintaining logs can help with forensic analysis when reviewing a breach, and can help establish baselines so that anomalies can be noticed when they occur. The statistics around actual time spent analyzing logs for attacks were extremely telling, though. When the IT professionals were asked how much time they normally spend on log-data analysis, the largest group (35%) replied, “none to a few hours per week.” As for the rest, 18% didn’t know, 11% said one day per week, 2% outsourced this task to a managed security service provider, and 24% described it as “integrated into normal workflow.” The SANS survey report, which notes that analysis time overall actually seems to be down from last year, found that about 50% of the smaller organizations spent zero to just a few hours analyzing logs.
These statistics show that log analysis is a difficult and time-consuming process that even the largest organizations are struggling to integrate into everyday security operations, much less smaller organizations with limited security staffs. That is why we at FireMon believe it is vital to augment SIEM products with a tool that can operationalize the identification of risk to the network in real time. The tool should automate the process of identifying assets that can be compromised, and it should be simple to deploy for any organization regardless of size. Risk Analyzer is just such a product. It automates the identification of assets at risk in your network and provides a prioritized list of actions that will reduce the greatest amount of risk with the least amount of effort. As the SANS report notes in its conclusion, “the issue has been getting usable and actionable information out of the data when they need it for detection and response.” Risk Analyzer does exactly that; it provides actionable information that will reduce the risk to your network.
Symantec recently published their 2011 Threat Report. I always find this an interesting and worthwhile read. But I have a pet peeve with a common misclassification of vulnerabilities as threats. The two are distinct but related: threats often exploit vulnerabilities to achieve their goal. While related, vulnerabilities are not threats, and mixing the two confuses the conversation.
Symantec is not oblivious to this fact, and they carefully keep the distinction in mind when writing the report. In sections referring to vulnerabilities, they do not mix the terms vulnerability and threat. But the report title is “Internet Security Threat Report”, and using the number of vulnerabilities discovered in 2011 as a metric to indicate the threat trend is not appropriate.
There is still a place to discuss vulnerabilities in this report; in particular, which types of vulnerabilities are being targeted by threats is a very interesting analysis I would like to see more of. But take the vulnerability count off the headline graphics and don’t use it as a measure of threat. Vulnerabilities are not threats, and they are not risk. Vulnerabilities are weaknesses potentially exploited by threats.
I was watching a video from Cloud Passage earlier today about their new beta for Windows firewall management: Halo for Windows. I don’t mean to take anything away from their work, and I think it is a good new offering. But something jumped out at me near the end of the video: the administrator chose to log only “drops”. Why just the dropped traffic?
I hear this fairly frequently from people who choose to log only dropped traffic, since it represents the “bad” traffic and they can send these logs to their SIEM to get alerts on the dropped connections. Particularly when logging performance is a concern, administrators want to reduce the impact by logging less, so they will turn logging off on highly utilized rules where they *know* what traffic is flowing through those rules. But they continue to log ALL of their dropped traffic. This is completely wrong.
Logging dropped packets does two positive things for you:
- It allows you to verify your technology is actually working (confirming that the millions of dollars you spent on your firewall are actually doing something)
- It identifies attacks that failed
I don’t dismiss that there is some value in #2, to build up a repository of threats. It can also aid in discovering malware inside your network, among a few other good uses. For this reason, I still strongly encourage logging many drop rules. But remember, this traffic FAILED. The preventative technology (firewall, IPS, etc.) succeeded. As for the first case, if you don’t trust the technology, don’t buy it. And certainly don’t use this count like a scoreboard of security success. The fact that you successfully blocked traffic is not proof of security…no matter how many things you drop. This is not a security success metric!
Instead, if you care about security, you should be logging your accepts. This is the traffic that can represent an actual risk to your organization. This is the traffic that successfully passes through your security defenses. There is a ton of value in this data:
- Forensics review after a breach is discovered to learn when it started and how long it lasted
- Threat alerts when known bad actors are SUCCEEDING in accessing resources in your organization
- Anomaly detection when there is an unexpected spike (or drop) in typical traffic behavior
This attitude of logging all dropped traffic has been promoted by just about everyone, starting with the firewall and IDS vendors, who want to show “value” by logging dropped traffic (“look, see, I dropped another attack!”). It is also promoted by standards that say almost nothing about what a firewall policy should or should not do, but nearly always include a recommendation to “include a clean up rule and LOG it”. I don’t disagree with logging cleanup rules. But this is not nearly as important as logging successful access. In the case of a drop, you already succeeded in thwarting the attack, so the log is of little additional value. In the case of an accept, the traffic is worthy of some additional scrutiny.
My suggestion…log all accepted traffic and reassess which drop rules you want to log.
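As a rough illustration of what mining accept logs can buy you (the log format and threat list below are invented for the example), a few lines of Python can flag accepted connections from known-bad addresses and spot unusual spikes in accept volume:

```python
"""Sketch of why accepted-traffic logs are worth keeping: check accepts
against a list of known-bad addresses and flag unusual spikes in accepted
connection volume. Records and the threat list are invented for illustration."""

from collections import Counter
from statistics import mean, pstdev

# Hypothetical accept-log records: (hour, src_ip, dst_ip, dst_port)
accepts = [
    (9, "203.0.113.50", "10.0.0.5", 443),
    (9, "198.51.100.9", "10.0.0.5", 443),
    (10, "203.0.113.50", "10.0.0.8", 3389),
]

KNOWN_BAD = {"203.0.113.50"}  # e.g. pulled from an external threat feed

# 1. Known bad actors that SUCCEEDED in getting through
for hour, src, dst, port in accepts:
    if src in KNOWN_BAD:
        print(f"ALERT: accepted connection from known-bad {src} to {dst}:{port}")

# 2. Simple anomaly check: hours whose accept volume is far above the norm
per_hour = Counter(hour for hour, *_ in accepts)
counts = list(per_hour.values())
if len(counts) > 1:
    mu, sigma = mean(counts), pstdev(counts) or 1
    for hour, n in per_hour.items():
        if (n - mu) / sigma > 3:
            print(f"Anomaly: {n} accepted connections in hour {hour}")
```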
[NOTE: in the Halo example above, since it is a host-based firewall, there can be limited value in logging the http accepts to the local web server since the web server should be logging connections as well. This video just happened to get me thinking about this topic this morning.]
Many in the security field have been following the story of the Global Payments breach this week. Brian Krebs first reported it on his award-winning security blog, and he has continued to follow the story as more details have been uncovered day by day. As many outlets reported, due to the breach, Visa removed Global Payments from its list of preferred vendors. Global Payments can still process transactions, but at a significantly higher fee. The company’s stock dropped 9% the day the breach was announced, before trading was halted, and it has continued to drop since trading resumed on Monday. It is also expected that Global Payments will have to dip into its cash reserves of $300-400 million to cover the losses associated with the breach.
The financial blows to Global Payments noted above highlight the significant impact a security breach can have on a company today. Gone are the days when security vendors warned of the potential impact a nefarious hacker might have on your network, hoping to play the fear card in order to gain a sale. The threats from multinational criminal and state-sponsored hacker groups are now very real, and they can inflict significant financial and public relations damage on your organization. With the spate of attacks and breaches covered in the last year, security is finally becoming a topic of focus in the executive suite, with many leaders struggling to determine how to communicate the state of security effectively.
Global Payments issued a statement on the breach, which included the following from their CEO: “It is reassuring that our security processes detected an intrusion.” However, in Krebs’ latest update to the story, he notes that the New York Times reported that Global Payments was breached in early 2011. One of Krebs’ hacker sources shared similar information, saying “the company’s [Global Payments] network was under full criminal control from that time until March 26, 2012.” Global Payments’ stock has been hit, their fees to do business with Visa have significantly increased, and a large payout from their cash reserves to both Visa and MasterCard looms to cover the cardholder losses from this breach. In light of those facts, it is surprising to hear their CEO is reassured they discovered the intrusion after the fact.
Breaches like Global Payments, as well as the numerous incidents highlighted in 2011, show that the reactive approach that has been taken within the security world is not adequate to protect companies from the negative financial impact a breach can inflict. Companies need to operationalize risk within their day-to-day security activities and reduce the danger to their networks by making threats and vulnerabilities visible and actionable. This enables organizations to prioritize and address high-risk security vulnerabilities before breaches occur. FireMon’s Risk Analyzer, now integrated into Security Manager with the 6.0 release, automates the identification of which assets are vulnerable within a network and prioritizes the actions that will reduce the greatest amount of risk with the least amount of effort. Risk Analyzer moves security from a reactionary exercise to a proactive approach that allows you to fix your vulnerable assets before they can be exploited. As this latest breach shows, failing to operationalize risk within your security organization can be a costly decision.
Well, another week at the RSA Conference is in the books. I must say that this was the best conference that I’ve been to in many years. I was thrilled to see the security industry back and stronger than ever. After a few slow years, the conference was packed and there was excitement in the air (our friends Alan Shimel (here) and Mike Rothman (here) agree).
Of course, we saw the mega-trends (cloud, virtualization, big data) in full force. But I was struck by how strong the firewall segment of the industry continues to be. It was good to see our friends at Juniper, Check Point, McAfee and Fortinet so well represented with big booths and even bigger attendance. Next-generation firewalls continued to have a lot of buzz around them, led by our partners at Palo Alto Networks, and it was exciting to get a closer look at the newest entry into the enterprise firewall market, a datacenter firewall from our long-time friends at F5 Networks.
What I took away from the conversations that I had, including leading a panel discussion on the state of firewalls to a packed house of 600 (more here), was that firewalls continue to have an important place in the network. And I say that for a very practical reason: I realize that we could secure every host on the network individually. But the explosion of computing power that has led to incredibly dynamic, ever-expanding virtual datacenters has further solidified for me that we need a common place to enforce our access controls – and the network is the right place to do that. Now, how we enforce controls will change (purpose-built firewalls are quickly becoming a reality), and you should choose the right tool for the job given the particular problem you face. But there is still a great economy of scale in controlling a few ingress/egress points instead of managing a policy on every host.
The other theme that I heard from the folks who stopped by our booth was that they were overwhelmed by the vulnerabilities on their networks. One gentleman confided in me that he had 85,000 hosts on the network and even more vulnerabilities than that. I showed him our new Risk Analyzer product and how it could map those vulnerabilities in the context of the network security protections he already had in place and measure the true risk of exposure from his threat sources. My message to him and others was simple: stop managing vulnerabilities and start managing risk.
That’s the vision behind our Risk Analyzer product. Risk Analyzer is a proactive, complete network attack simulation and risk measurement solution allowing you to assess the security of your most valuable assets. Ready to change your perspective on network security? Learn more about Risk Analyzer at http://www.firemon.com/riskanalyzer.