If you left your car unlocked with valuables visible in the front seat, would you blame the car manufacturer if someone stole those items? I doubt you would, and I seriously doubt anyone would listen if you tried. But a recent US Federal Court of Appeals ruling in the case of Patco Construction Co. v. People’s United Bank might indicate that yes, the car manufacturer, or in this case the bank, is liable.
The case revolves around the plaintiff, Patco, whose credentials and account information were compromised. The cyber thieves were then able to log in to the bank’s site and initiate several transfers totaling more than half a million dollars. When the fraud was reported, the bank was able to recover about half of the stolen funds, but it refused to refund Patco the rest.
In this case, the bank’s position was that Patco was negligent enough to have its usernames, passwords and account information stolen. Why should the bank bear the cost of Patco’s mistake? The bank said that Patco should be responsible for its own losses. It is hard to argue against the bank’s position: why should it bear the loss when it had nothing to do with Patco exposing its credentials?
But Patco is understandably upset given the size of the loss, and it has a valid concern: the bank’s own internal systems flagged the transactions as suspicious and yet didn’t stop them. Patco’s argument is that regardless of why or how the fraud was initiated, the bank must provide commercially reasonable controls to prevent fraud. The court was persuaded by this argument.
For now, the appeals court has sent the case back down for adjudication and suggested to both parties that they try to settle before a verdict. But the case itself brings up a bigger issue:
When does an organization or individual have to take responsibility for its own security? If the bank were held liable in the Patco case, what message does that send? One could say the message is: don’t worry too much about your online banking credentials, because at the end of the day, if anything bad happens, the bank is liable anyway. I don’t think that is the message we should be sending. How can we expect banks to take on this exposure without figuring that risk into the fee equation?
No doubt there are many instances of negligence and poor security where consumers should hold the failing institution liable for loss of money or information. But there must be some shared responsibility for security.
Roger Grimes and I have engaged in a very interesting conversation around the necessity and value of firewalls. Yesterday I took issue in my blog post with Roger’s initial claim that the firewall is dead. In response, Roger continues his argument in his post, The Firestorm over Firewalls.
Roger seems to have conceded the argument on ineffective management and instead doubled down on two core points:
- 99% of all attacks are client-side initiated and the firewall is ineffective at protecting against these attacks
- The fact that the industry is not more secure is proof that the firewall is worthless
I still take significant issue with the argument that 99% of all attacks are client-side, and Roger’s “proof” that anti-virus vendors block a lot of stuff is not compelling to me. Remember, firewalls “block” a lot of stuff too, with billions of logs of dropped traffic generated every second worldwide. Neither of these points is sufficient to make or dispel the 99% claim. The Verizon Data Breach Investigations Report I referenced is also not perfect, as Roger points out, since it covers only a minority of all attacks worldwide. But it is the best source I am aware of, so I think it is still worth referencing. And pointing to a sample graphic (on page 8) meant to describe a documentation standard as “proof” that client-side attacks are responsible for all breaches is not very compelling either, especially since it was not written to support this point in any way. However, even if we do accept Roger’s “proof” graphic on page 8, take a look at the paragraph describing it: it claims an egress filter (a firewall) could have prevented the breach, which seems to dispel Roger’s obituary for the firewall.
But let’s set statistics aside. I imagine there are plenty of other people who can more credibly respond to Roger’s unsubstantiated claim that 99% of attacks are client-side. And I don’t mean to argue that client-side attacks are not an issue; I simply mean to claim they are not the only issue.
Instead, I would like to hypothetically accept Roger’s position that 99% of all successful attacks are client-side. I would argue this change in attack vectors over the years strengthens the case that a well-configured firewall is an effective security control. It is a matter of attackers coming in through the open window instead of the closed door. The growth in client-side attacks suggests the direct attack is being successfully thwarted by the firewall, so attackers are exploiting the less effective defenses instead.
The great thing about a firewall is that it employs a positive security model: only what you decide to allow is permitted, and everything else is denied. When managed well, this makes it a great security solution. In contrast, malware detection and anti-virus software employ a negative security model, where everything is allowed and only known bad attacks are denied. This creates a horrible cat-and-mouse game that the attackers seem adept at winning by staying a step ahead of the latest signatures. Which raises the question: if Roger’s argument is that client-side attacks are the real problem, and the fact that we still have security problems is justification to “kill” a technology, why does he pick on the firewall? Shouldn’t he instead have called anti-virus or anti-malware or some other client-side technology dead? The firewall, according to Roger’s own logic, is the one technology in the game that is working.
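To make the contrast concrete, here is a minimal sketch of the two models in Python. The rules, flows and signature names are entirely hypothetical, invented for illustration:

```python
# Positive model (firewall): an explicit allow-list; anything not
# listed is denied by default. Entries are (source, destination, port).
ALLOW_RULES = {
    ("internal", "web-server", 443),
    ("internal", "mail-server", 25),
}

def firewall_permits(src: str, dst: str, port: int) -> bool:
    return (src, dst, port) in ALLOW_RULES

# Negative model (anti-virus): an explicit deny-list of known-bad
# signatures; anything not yet known is allowed by default.
KNOWN_BAD_SIGNATURES = {"exploit-kit-123", "trojan-xyz"}

def av_permits(signature: str) -> bool:
    return signature not in KNOWN_BAD_SIGNATURES

# A brand-new attack sails through the negative model...
print(av_permits("never-seen-before-malware"))          # True: allowed
# ...while an unapproved flow is dropped by the positive model.
print(firewall_permits("internet", "db-server", 1433))  # False: denied
```

The cat-and-mouse game falls out of that last asymmetry: the negative model fails open on anything it hasn’t seen yet, while the positive model fails closed.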
The firewall isn’t dead. In fact, I think Roger’s arguments strengthen the case that firewalls are working. Are they perfect? No. Are they sufficient to solve all the security problems? No. Should we get rid of them because they are not perfect? NO!
And this gets to the heart of the matter: the fact that there remain security issues in information technology is not a matter of one technology working or not. It is not justification to call an effective technology “dead” because it doesn’t solve everything. When effectively managed, the firewall is a very effective security solution. Additional capabilities in NG Firewall technology continue to make it a relevant and central part of a security solution. It should not be considered THE solution, but it certainly shouldn’t be discounted either.
Today Roger Grimes posted an article on InfoWorld about the overdue death of the firewall: Why you don’t need a firewall. His case rests on two primary arguments: (1) the firewall doesn’t protect against modern-day threats, specifically client-side vulnerabilities and the fact that all apps run over ports 80 and 443, which can never be blocked at the firewall; and (2) the firewall is managed so poorly that it causes more problems than it solves.
Let’s separate these two points to discuss each more logically, starting with the value of a firewall in today’s threat environment. I take significant issue with his statement that, “Today, 99 percent of all successful attacks are client-side attacks”. This is not substantiated by any research, for good reason: it isn’t true. The Verizon Data Breach Investigations Report actually discusses successful attacks in significant depth and completely invalidates this point. It reports that 81% of all attacks and 99% of lost data are a direct result of “Hacking”. It goes on to specify that access to remote services (e.g., VNC, RDP) “combined with default, weak or stolen credentials” accounts for 88% of all breaches. The assumption that 99% of attacks are client-side is dead wrong.
With remote access to services remaining the greatest attack vector today, firewalls still play a very significant role, and they are changing dramatically. It would also seem that Roger is ignoring new advancements in firewall technology. Next-generation firewalls are specifically adept at helping prevent the client-side attack. No longer are ports 80 and 443 an open highway through which everything can pass. User-based and application-based policies permit effective control of outbound access.
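To illustrate the shift, here is a hypothetical, highly simplified sketch of an application- and user-aware egress policy. The group and application names are invented, and a real next-gen firewall identifies applications by inspecting traffic rather than consulting a lookup table:

```python
# Ordered egress policy entries: (user group, application, action).
# "any" matches every user group; first match wins.
EGRESS_POLICY = [
    ("marketing",   "salesforce",   "allow"),
    ("engineering", "github",       "allow"),
    ("any",         "web-browsing", "allow"),
    ("any",         "tor",          "deny"),
]

def evaluate(user_group: str, application: str) -> str:
    for group, app, action in EGRESS_POLICY:
        if group in (user_group, "any") and app == application:
            return action
    return "deny"  # positive model: unmatched traffic is denied

# An unsanctioned app tunneling over port 443 is still denied, because
# the decision keys on the identified application, not the port number.
print(evaluate("marketing", "unknown-p2p"))  # deny
```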
Roger’s second point, on ineffective management, identifies a real problem, but I don’t agree with his conclusion. His argument that ineffective management, where rules are created that permit nearly all access, renders the firewall useless is absolutely correct. Ineffective management that leads to poor configurations can turn the best firewall technology into nothing more than a router passing all traffic. But his conclusion that this means the firewall should die is a bad leap in logic. Poor management is not cause to kill the technology. Instead, I propose more effective management.
FireMon has been dedicated to this very idea of better firewall management for over a decade. Ineffective firewalls are not caused by bad technology or incapable administrators; they are a management problem. A stream of 1,000 logs per second makes no sense to a human staring at a screen, but with some automation of log analysis, those logs provide a wealth of information. A single firewall policy of 500 complex rules may be nearly impossible to evaluate by hand to understand what access is truly being allowed, but with a powerful policy analysis tool, it is a trivial exercise. Even Roger’s example of a poorly defined rule left as “ANY ANY” due to missing requirements is a solvable problem with the right tools. FireMon provides a powerful Traffic Flow Analysis tool that analyzes traffic flowing through overly permissive rules, permitting retroactive correction of these problematic rules.
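As a rough illustration of the idea behind that kind of traffic flow analysis (a sketch of the concept, not FireMon’s actual implementation), consider aggregating the logs of traffic that matched an overly permissive rule and deriving the tighter set of flows actually in use:

```python
from collections import Counter

# Hypothetical log records for traffic that matched an "ANY ANY" rule;
# in practice these would be parsed from the firewall's syslog feed.
logs = [
    {"src": "10.1.1.5", "dst": "10.2.0.10", "port": 443},
    {"src": "10.1.1.5", "dst": "10.2.0.10", "port": 443},
    {"src": "10.1.2.9", "dst": "10.2.0.11", "port": 25},
    # ...millions more in a real environment
]

# Count the distinct flows actually observed through the broad rule.
observed = Counter((r["src"], r["dst"], r["port"]) for r in logs)

# Each observed flow becomes a candidate specific rule to replace ANY ANY.
print("Candidate replacements for the ANY ANY rule:")
for (src, dst, port), hits in observed.most_common():
    print(f"  permit {src} -> {dst}:{port}  ({hits} hits observed)")
```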
The firewall is not dead and won’t be. With next-gen capabilities and effective management – which is possible and available today – the firewall will remain a critical component of security solutions forever.
Symantec recently published their 2011 Threat Report. I always find it an interesting and worthwhile read. But I have a pet peeve with a common misclassification of vulnerabilities as threats. The two are distinct but related: threats often exploit vulnerabilities to achieve their goals. While related, vulnerabilities are not threats, and mixing the two confuses the conversation.
Symantec is not oblivious to this fact, and they carefully keep the distinction in mind when writing the report. In sections referring to vulnerabilities, they do not mix the terms vulnerability and threat. But the report title is “Internet Security Threat Report”, and using the number of vulnerabilities discovered in 2011 as a metric to indicate the threat trend is not appropriate.
There is still a place to discuss vulnerabilities in this report; in particular, what types of vulnerabilities are being targeted by threats is a very interesting analysis I would like to see more of. But take the vulnerability count off the headline graphics and don’t use it as a measure of threat. Vulnerabilities are not threats, and they are not risk. Vulnerabilities are weaknesses potentially exploited by threats.
I was watching a video from CloudPassage earlier today about their new beta for Windows firewall management: Halo for Windows. I don’t mean to take anything away from their work, and I think it is a good new offering. But something jumped out at me near the end of the video: the administrator chose to log only “drops”. Why just the dropped traffic?
I hear this fairly frequently from people who choose to log only dropped traffic, since it represents the “bad” traffic, and they can send these logs to their SIEM to get alerts on those dropped connections. Particularly when the performance impact of logging is a concern, administrators want to reduce their logging, so they turn logging off on highly utilized rules where they *know* what traffic is flowing through, but they continue to log ALL their dropped traffic. This is completely wrong.
Logging dropped packets does two positive things for you:
- It allows you to verify your technology is actually working (confirming that the millions of dollars you spent on your firewall are actually doing something)
- It identifies attacks that failed
I don’t deny there is some value in #2, to build up a repository of threats. It can also aid in discovering malware inside your network, among a few other good uses. For this reason, I still strongly encourage logging many drop rules. But remember, this traffic FAILED. The preventative technology (firewall, IPS, etc.) succeeded. As for the first case, if you don’t trust the technology, don’t buy it. And certainly don’t use this count like a scoreboard of security success. The fact that you successfully blocked traffic is not proof of security…no matter how many things you drop. This is not a security success metric!
Instead, if you care about security, you should be logging your accepts. This is the traffic that can represent an actual risk to your organization. This is the traffic that successfully passes through your security defenses. There is a ton of value in this data:
- Forensics review after a breach is discovered to learn when it started and how long it lasted
- Threat alerts when known bad actors are SUCCEEDING in accessing resources in your organization
- Anomaly detection when there is an unexpected spike (or drop) in typical traffic behavior (see the sketch after this list)
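Here is the simplest possible version of that anomaly detection idea, a minimal sketch over invented hourly accept counts. A real system would baseline per hour of day and per service rather than using one global mean:

```python
import statistics

# Hypothetical hourly counts of ACCEPTED connections to one server,
# aggregated from firewall accept logs.
hourly_accepts = [120, 131, 118, 127, 125, 122, 640, 119]

mean = statistics.mean(hourly_accepts)
stdev = statistics.stdev(hourly_accepts)

# Flag hours where accepted traffic deviates sharply from the norm.
for hour, count in enumerate(hourly_accepts):
    if abs(count - mean) > 2 * stdev:
        print(f"hour {hour}: {count} accepts (baseline ~{mean:.0f}) -- investigate")
```

Without accept logs, that spike in successful connections is invisible; no volume of drop logs would have surfaced it.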
This attitude of logging all dropped traffic has been promoted by just about everyone, starting with the firewall and IDS vendors, who want to show “value” by logging dropped traffic (“look, see, I dropped another attack!”). It is also promoted by standards that say almost nothing about what a firewall policy should or should not do but nearly always include a recommendation to “include a clean up rule and LOG it”. I don’t disagree with logging cleanup rules. But this is not nearly as important as logging successful access. In the case of a drop, you already succeeded in thwarting the attack; the log is of little additional value. In the case of an accept, the traffic is worthy of some additional scrutiny.
My suggestion…log all accepted traffic and reassess which drop rules you want to log.
[NOTE: in the Halo example above, since it is a host-based firewall, there can be limited value in logging the http accepts to the local web server since the web server should be logging connections as well. This video just happened to get me thinking about this topic this morning.]
Well, another week at the RSA Conference is in the books. I must say that this was the best conference that I’ve been to in many years. I was thrilled to see the security industry back and stronger than ever. After a few slow years, the conference was packed and there was excitement in the air (our friends Alan Shimel (here) and Mike Rothman (here) agree).
Of course, we saw the mega-trends (cloud, virtualization, big data) in full force. But I was struck by how strong the firewall segment of the industry continued to be. It was good to see our friends at Juniper, Check Point, McAfee and Fortinet be so well represented with big booths and even bigger attendance. Next-generation firewalls continued to have a lot of buzz around them, led by our partners at Palo Alto Networks, and it was exciting to get a closer look at the newest entry into the enterprise firewall market, a datacenter firewall from our long-time friends at F5 Networks.
What I took away from the conversations that I had, including leading a panel discussion on the state of firewalls to a packed house of 600 (more here), was that firewalls continue to have an important place in the network. And I say that for a very practical reason: I realize that we could secure every host on the network individually. But the explosion of computing power that has led to incredibly dynamic, ever-expanding virtual datacenters has further solidified for me that we need a common place to enforce our access controls, and the network is the right place to do that. Now, how we enforce controls will change (purpose-built firewalls are quickly becoming a reality), and you should choose the right tool for the job given the particular problem you face. But there is still a great economy of scale in controlling a few ingress/egress points instead of managing a policy on every host.
The other theme that I heard from the folks who stopped by our booth was that they were overwhelmed by the vulnerabilities on their networks. One gentleman confided in me that he had 85,000 hosts on his network and even more vulnerabilities than that. I showed him our new Risk Analyzer product and how it could map those vulnerabilities in the context of the network security protections he already had in place and measure the true risk of exposure from his threat sources. My message to him and others was simple: stop managing vulnerabilities and start managing risk.
That’s the vision behind our Risk Analyzer product. Risk Analyzer is a proactive, complete network attack simulation and risk measurement solution allowing you to assess the security of your most valuable assets. Ready to change your perspective on network security? Learn more about Risk Analyzer at http://www.firemon.com/riskanalyzer.
In a recent article on NETASQ, Richard Stiennon provides a “brief history of firewalls and the rise of UTM”. I have a couple of problems with his conclusions. However, Richard does a good job describing how the firewall market has evolved over the years and gives his analysis of where it is heading. He has had a ringside seat to this history and provides a good 15+ year history in three paragraphs, bringing us up to today with the recent appearance of next-generation firewalls.
My first, minor disagreement is with Richard’s view and definition of a UTM. I do agree the “bad reputation” UTM received in the early days was well deserved. In my view, the UTM market was created by a class of firewalls attempting to disrupt the established firewall vendors by throwing more features into the same box. These solutions lacked effective management and did not scale to enterprise needs. Because of this, UTM has become synonymous with “SMB firewall” in my view. For this reason, I don’t consider a Check Point firewall with an IPS “blade” a UTM any more than I considered the Check Point firewall with VPN functionality in 1998 a UTM. And I certainly don’t consider a Palo Alto Networks firewall a UTM. The advancement of the NG firewalls was a new way to manage access (users and applications), not another commodity security product consolidated (crammed) onto the same box.
But that critique is pretty petty as it is just a name. Richard’s basic history that the firewalls of today do more than the firewalls of yesterday is true. And it is part of the reason that the demand for FireMon and firewall management solutions in general continues to increase. New security functionality does not necessarily translate into better security. It must be effectively managed.
Here is my major critique: the history of consolidating security functionality into the firewall is not necessarily the path of firewall innovation in the years ahead. The market drivers of data center consolidation, virtualization and cloud computing are changing the role of the firewall. But, unlike Richard, I don’t think this necessarily means stuffing more into the firewall. In fact, it may mean just the opposite: purpose-built firewalls for purpose-demanding situations.
Take, for example, the web-hosting DMZ infrastructure. We wrote about this not long ago here. A general-purpose firewall only controlling HTTP access between users and web servers is not doing much but slowing down access and barely tapping the capability of the firewall. However, a firewall with specific knowledge of web access embedded in a load balancer could be very interesting in this scenario (see F5’s recent announcement).
And virtualization is another fast-moving market that will stretch the bounds of firewalls. While embedding switches in firewalls has been around for a while in UTM devices and will remain a key feature of those SMB devices, it is much more likely for security to get integrated into switches in the enterprise. Last week, Nicira made public their recent work to manage virtual switches (great read: http://nicira.com/en/platform-for-innovation). One advantage of creating a software abstraction layer above the physical wires is that it allows “rules” to move with the virtualized network port. And they are certainly not alone: Juniper has similar visions and commented on Nicira’s work here, and Cisco has similar visions with their Nexus 1000v.
But this dynamic network of controlling access per port (a port that moves around the network as VMs move) is not asking for some new type of security akin to cramming a new feature into a UTM. The vision of the dynamic network is demanding dynamic management of security technology we already understand: control access based on application, user, port, protocol and network. It is also demanding high performance, something not frequently associated with bloated UTM features.
In all cases, complexity is increasing. Whether it is the increase in features being added to firewalls or the demand to control access at a more granular level, the complexity of the technology is growing, and this increase in complexity will demand new management solutions. As great as the new technology is, unless it is properly managed, it won’t provide the intended security.
FireMon announced today what we here in the company have known for a while now: 2011 was a spectacular year for us, and we are so proud that we want you to know. Some of the highlights of 2011 for us:
- 50% year-over-year sales growth in firewall management solutions to enterprises while continuing to achieve profitable business operations for the year.
- Dramatic new customer growth with over 80 new Fortune 1000, government, healthcare, financial services and service provider customers including enterprises like Accor, American Automobile Association, and EarthLink and MSSPs like GBprotect.
- Acquired Saperix Technologies and patented risk analysis software developed at MIT Lincoln Laboratory that quantifies risk by identifying both critical threats and the most effective countermeasures.
- Expanded international operations in EMEA and Asia-Pacific with new executives and technical resources in the United Kingdom, France, Germany and Australia.
- Increased focus on business and technology partnerships with the addition of proven security industry channel management and business development executives.
The best part of all this great news is that we think it really positions us to make 2012 an even better year. We are poised to really take off as a result of additions to both our team and product lineup.
As the people who virtually invented the firewall management space, we are very excited to see firewalls, through the introduction of “next gen firewalls”, become hot again. Security device management and scenario based risk management are two of the most important issues facing organizations. We are uniquely situated to offer solutions and answers to these issues.
In the meantime, congratulations to everyone on the FireMon team for a job well done. But most of all, a huge thank you to our customers and partners, for without you none of this would be possible. Thanks to all of you, and here is to a great 2012!
Adam Ely wrote a nice article on Dark Reading (“Tech Insight: What to Do When Your Business Partner Is Breached”) about how to respond when you become aware of a breach of a business partner. He discusses a very broad array of activities and responses you should consider immediately, on-going and post a breach.
One thing that jumped out at me was the brief mention of understanding your organization’s exposure. Adam wrote,
“As you’re starting to piece together what occurred, it’s time to understand your organization’s exposure. You’ll need to fully understand what service the partner provides to your organization, the data it possesses, and how you are connected to each other. A breach of a third-party email provider has a different impact than breach of a two-factor authentication vendor. Understanding the total exposure will help you define the risk associated with the breach, the actions you must take, and how fast you must move.”
“Understand your organization’s exposure” is no small task. In some cases, it’s too late to mitigate; in others, there could be a massive exposure waiting to be exploited. For example, if the business partner provides a billing service for you, all the records they possess about your customers may already be exposed. In another case, an application development provider may have connected access to critical assets in your organization that are now exposed to a new threat. In all cases, it is important to understand how you are connected to each other in order to monitor and mitigate any further proliferation of the breach.
Understanding the risk from a business partner, whose “threat” value must now be seen as heightened post-breach, can be a very big project. Sadly, in many enterprises, even the layer 3 network diagram is not up to date enough to provide an accurate picture of partner connections, let alone a complete picture of access. And, as Adam points out, time is not on our side in this instance. A quick and effective response to this new threat is critical to limiting the propagation and impact of a partner breach. Understanding “exposure” from this threat is the key to that response.
Risk Analyzer is designed for just this purpose: with a threat in mind, understand your network’s exposure to that threat. Remediation activities like prioritizing vulnerability fixes, mitigation activities like blocking some connectivity until resolution is achieved, and limiting impact by actively monitoring (perhaps network recording) all access from the breached partner are all good responses if you understand your exposure. Getting a clear picture of what is exposed is still the first step.
Adam continues on to discuss much more than just the technical next steps, including contract negotiation and breach disclosure. But heeding his advice to understand your exposure and act fast to limit the impact is key to handling this situation.
Vulnerability…”I do not think it means what you think it means.”
Continuing our series of posts on Risk, I wanted to next shine a light on one of the most misunderstood, or better yet misused, terms in security: vulnerability. What does vulnerability mean to you? How is it connected to Risk?
While vulnerability is certainly part of any risk analysis, the term has been co-opted and blown out of proportion across much of the security and risk management space. This is partly due to the great job that the vulnerability management and patch management vendors have done in bringing vulnerabilities to the forefront of our risk management activities. But as we said in our earlier post, there is more to Risk than vulnerability.
Rather than reinvent the wheel, I wanted to go back to what many consider a seminal piece on the subject: Jack Jones’s An Introduction to Factor Analysis of Information Risk (FAIR). Jones perhaps said it best when he wrote:
A final point is that there’s a tendency to equate vulnerability with risk. We see a frayed rope (or a server that isn’t properly configured) and automatically conclude that the risk is high. Is there a correlation between vulnerability and risk? Yes. Is the correlation linear? No, because vulnerability is only one component of risk. Threat event frequency and loss magnitude also are key parts of the risk equation.
So, in spite of this, why have so many gone off the deep end on vulnerabilities? I imagine it is because highly publicized, severe vulnerabilities keep being disclosed on a frequent and regular basis, and because vulnerability is the best “measured” factor in security today (see CVSS). To use a baseball analogy from Moneyball: measuring vulnerabilities to infer risk, out of context from threats, other security countermeasures and other risk factors, is like tracking the stat “at bats” as a key metric for measuring wins. Related, yes. Directly correlated, no. Just because there is a vulnerability doesn’t mean it will be exploited, that it can even be reached, or that it is worth exploiting. So in measuring risk, it is critical to measure more than just vulnerabilities.
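To make that point concrete, here is an illustrative back-of-the-envelope calculation in the spirit of FAIR’s factors. All the numbers are invented, and real FAIR analyses work with calibrated ranges rather than point estimates:

```python
def expected_annual_loss(threat_event_frequency: float,
                         vulnerability: float,
                         loss_magnitude: float) -> float:
    """A toy risk estimate: threat events per year, times the probability
    a threat event becomes a loss event, times expected loss per event."""
    return threat_event_frequency * vulnerability * loss_magnitude

# Two hypothetical servers with the SAME vulnerability score carry very
# different risk once threat frequency and loss magnitude are factored in.
internet_facing = expected_annual_loss(50, 0.3, 100_000)  # $1,500,000/year
isolated_lab    = expected_annual_loss(0.5, 0.3, 10_000)  # $1,500/year
print(internet_facing, isolated_lab)
```

Ranking by vulnerability score alone would treat these two servers identically; factoring in threat frequency and loss magnitude separates them by three orders of magnitude.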
I am not suggesting we stop assessing and measuring vulnerabilities. However, with risk-based products like our Risk Analyzer, I hope we start including the other factors that need to be part of our analysis so that we can start measuring risk more completely.
Vulnerability is not Risk. Inconceivable!