If you left your car unlocked with valuables visible on the front seat, would you blame the car manufacturer if someone stole those items? I doubt you would, and I seriously doubt anyone would listen if you tried. But a recent US Federal Court of Appeals ruling in Patco Construction Co. v. People's United Bank might indicate that, yes, the car manufacturer, or in this case the bank, is liable.
The case revolves around the plaintiff, whose credentials and account information were compromised. The cyber thieves were then able to log in to the bank's site and initiate several transfers totaling more than half a million dollars. When the fraud was reported, the bank was able to recover about half of the stolen funds, but it refused to refund Patco the rest of the stolen money.
In this case, the bank's position was that Patco was negligent enough to have its usernames, passwords and account information stolen. Why should the bank bear the cost of Patco's mistake? The bank said Patco should be responsible for its own losses. It is hard to argue against that position: why should the bank bear the loss when it had nothing to do with Patco exposing its credentials?
But Patco is understandably upset given the size of the loss, and it has a valid concern: the bank's own internal systems flagged the transactions as suspect, yet did not stop them. Patco's argument is that regardless of why or how the fraud was initiated, the bank must provide commercially reasonable controls to prevent fraud. The court was persuaded by this argument.
For now, the case has been sent back down for adjudication, and the court suggested to both parties that they try to settle before a verdict. But the case itself raises a bigger issue:
When does an organization or individual have to take responsibility for its own actions on security? If the bank is held liable in the Patco case, what message does that send? One could say the message is: don't worry too much about your online banking credentials, because at the end of the day, if anything bad happens, the bank is liable anyway. I don't think that is the message we should be sending. How can we expect banks to take on this exposure without figuring that risk into the fee equation?
No doubt there are many instances of negligence and poor security where consumers should hold the failing institution liable for loss of money or information. But there must be some shared responsibility for security.
Symantec recently published its 2011 threat report. I always find this an interesting and worthwhile read. But I have a pet peeve with a common misclassification of vulnerabilities as threats. The two are distinct but related: threats will often exploit vulnerabilities to achieve their goal. Related or not, vulnerabilities are not threats, and mixing the two confuses the conversation.
Symantec is not oblivious to this fact; the report carefully maintains the distinction, and in sections covering vulnerabilities it does not mix the terms vulnerability and threat. But the report's title is "Internet Security Threat Report", and using the number of vulnerabilities discovered in 2011 as a metric to indicate the threat trend is not appropriate.
There is still a place to discuss vulnerabilities in this report; in particular, the analysis of which types of vulnerabilities are being targeted by threats is something I would like to see more of. But take the vulnerability count off the headline graphics and don't use it as a measure of threat. Vulnerabilities are not threats, and they are not risk. Vulnerabilities are weaknesses that threats may exploit.
I was watching a video from CloudPassage earlier today about their new beta for Windows firewall management: Halo for Windows. I don't mean to take anything away from their work; I think it is a good new offering. But something jumped out at me near the end of the video: the administrator chose to log only "drops". Why just the dropped traffic?
I hear this fairly frequently from people who choose to log only dropped traffic, since it represents the "bad" traffic and they can send these logs to their SIEM to get alerts on the dropped connections. Particularly when the performance cost of logging is a concern, administrators will reduce the impact by turning logging off on highly utilized rules where they *know* what traffic is flowing through, while continuing to log ALL their dropped traffic. This is completely wrong.
Logging dropped packets does two positive things for you:
- It allows you to verify your technology is actually working (confirming that the millions of dollars you spent on your firewall are actually doing something)
- It identifies attacks that failed
I don't dismiss that there is some value in #2: it builds up a repository of threats, can aid in discovering malware inside your network, and has a few other good uses. For this reason, I still strongly encourage logging many drop rules. But remember, this traffic FAILED. The preventative technology (firewall, IPS, etc.) succeeded. As for the first case, if you don't trust the technology, don't buy it. And certainly don't use this count like a scoreboard of security success. The fact that you successfully blocked traffic is not proof of security, no matter how many things you drop. This is not a security success metric!
Instead, if you care about security, you should be logging your accepts. This is the traffic that can represent an actual risk to your organization; it is the traffic that successfully passed through your security defenses. There is a ton of value in this data:
- Forensic review after a breach is discovered, to learn when it started and how long it lasted
- Threat alerts when known bad actors are SUCCEEDING in accessing resources in your organization
- Anomaly detection when there is an unexpected spike (or drop) in typical traffic behavior
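As a rough sketch of the second and third items, here is what mining accept logs might look like. The log format, addresses and bad-actor list below are entirely hypothetical example data, not any particular firewall's or SIEM's output:

```python
from collections import Counter

# Hypothetical known-bad source addresses (example data only).
BAD_ACTORS = {"203.0.113.7", "198.51.100.23"}

# Hypothetical accept-log entries: (timestamp, source_ip, dest_ip, dest_port).
accept_logs = [
    ("2012-03-01T09:14:02", "203.0.113.7", "10.0.0.5", 443),
    ("2012-03-01T09:15:11", "192.0.2.44", "10.0.0.5", 443),
    ("2012-03-01T09:16:30", "192.0.2.44", "10.0.0.9", 22),
]

# Threat alerts: known bad actors that SUCCEEDED in connecting.
alerts = [entry for entry in accept_logs if entry[1] in BAD_ACTORS]

# Anomaly detection: per-source accept volume, to be compared against a
# historical baseline (the baseline itself is omitted for brevity).
volume_by_source = Counter(entry[1] for entry in accept_logs)

for ts, src, dst, port in alerts:
    print(f"ALERT: bad actor {src} was ALLOWED to reach {dst}:{port} at {ts}")
```

Notice that none of this analysis is possible from drop logs alone; the interesting events here are precisely the connections the firewall permitted.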
This attitude of logging all dropped traffic has been promoted by just about everyone, starting with the firewall and IDS vendors, who want to show "value" by logging dropped traffic ("look, see, I dropped another attack!"). It is also promoted by standards that say almost nothing about what a firewall policy should or should not do, but nearly always include a recommendation to "include a clean-up rule and LOG it". I don't disagree with logging cleanup rules, but it is not nearly as important as logging successful access. In the case of a drop, you already succeeded in thwarting the attack, so the log is of little additional value. In the case of an accept, the traffic is worthy of some additional scrutiny.
My suggestion: log all accepted traffic, and reassess which drop rules you want to log.
[NOTE: in the Halo example above, since it is a host-based firewall, there can be limited value in logging the http accepts to the local web server since the web server should be logging connections as well. This video just happened to get me thinking about this topic this morning.]
Well, another week at the RSA Conference is in the books. I must say that this was the best conference that I’ve been to in many years. I was thrilled to see the security industry back and stronger than ever. After a few slow years, the conference was packed and there was excitement in the air (our friends Alan Shimel (here) and Mike Rothman (here) agree).
Of course, we saw the mega-trends (cloud, virtualization, big data) in full force. But I was struck by how strong the firewall segment of the industry continued to be. It was good to see our friends at Juniper, Check Point, McAfee and Fortinet be so well represented with big booths and even bigger attendance. Next-generation firewalls continued to have a lot of buzz around them, led by our partners at Palo Alto Networks, and it was exciting to get a closer look at the newest entry into the enterprise firewall market, a datacenter firewall from our long-time friends at F5 Networks.
What I took away from the conversations I had, including leading a panel discussion on the state of firewalls to a packed house of 600 (more here), was that firewalls continue to have an important place in the network. And I say that for a very practical reason: while I realize we could secure every host on the network individually, the explosion of computing power that has led to incredibly dynamic, ever-expanding virtual datacenters has further solidified for me that we need a common place to enforce our access controls, and the network is the right place to do that. Now, how we enforce controls will change (purpose-built firewalls are quickly becoming a reality), and you should choose the right tool for the job given the particular problem you face. But there is still a great economy of scale in controlling a few ingress/egress points instead of managing a policy on every host.
The other theme I heard from the folks who stopped by our booth was that they were overwhelmed by the vulnerabilities on their networks. One gentleman confided in me that he had 85,000 hosts on the network and even more vulnerabilities than that. I showed him our new Risk Analyzer product, and how it could map those vulnerabilities in the context of the network security protections he already had in place and measure the true risk of exposure from his threat sources. My message to him and others was simple: stop managing vulnerabilities and start managing risk.
That’s the vision behind our Risk Analyzer product. Risk Analyzer is a proactive, complete network attack simulation and risk measurement solution allowing you to assess the security of your most valuable assets. Ready to change your perspective on network security? Learn more about Risk Analyzer at http://www.firemon.com/riskanalyzer.
In a recent article for NETASQ, Richard Stiennon provides a "brief history of firewalls and the rise of UTM". Richard does a good job describing how the firewall market has evolved over the years and gives his analysis of where it is heading. He has had a ringside seat to this history and compresses 15+ years into three paragraphs, bringing us up to today and the recent appearance of Next Generation Firewalls. I do, however, have a couple of problems with his conclusions.
My first, minor disagreement is with Richard's view and definition of a UTM. I do agree the "bad reputation" UTM received in the early days was well deserved. In my view, the UTM market was created by a class of firewalls attempting to disrupt the established firewall vendors by throwing more features into the same box. These solutions lacked effective management and did not scale to enterprise needs. Because of this, UTM has become synonymous with "SMB firewall" in my view. For this reason, I don't consider a Check Point firewall with an IPS "blade" a UTM, any more than I considered the Check Point firewall with VPN functionality in 1998 a UTM. And I certainly don't consider a Palo Alto Networks firewall a UTM. The advancement of the NG firewalls was a new way to manage access (users and applications), not another commodity security product consolidated (crammed) onto the same box.
But that critique is pretty petty, as it is just a name. Richard's basic history is true: the firewalls of today do more than the firewalls of yesterday. And that is part of the reason demand for FireMon, and firewall management solutions in general, continues to increase. New security functionality does not necessarily translate into better security; it must be effectively managed.
Here is my major critique: the history of consolidating security functionality into the firewall is not necessarily the path of firewall innovation in the years ahead. The market drivers of data center consolidation, virtualization and cloud computing are changing the role of the firewall. But, unlike Richard, I don’t think this necessarily means stuffing more into the firewall. In fact, it may mean just the opposite: purpose-built firewalls for purpose-demanding situations.
Take, for example, the web-hosting DMZ infrastructure. We wrote about this not long ago here. A general-purpose firewall controlling only http access between users and web servers is not doing much but slowing down access, and is barely tapping the capability of the firewall. However, a firewall with specific knowledge of web access embedded in a load balancer could be very interesting in this scenario (see F5's recent announcement).
And virtualization is another fast-moving market that will stretch the bounds of firewalls. While embedding switches in firewalls has been around for a while in UTM devices and will remain a key feature of those SMB devices, in the enterprise it is much more likely for security to be integrated into switches. Last week, Nicira made public their recent work to manage virtual switches (great read: http://nicira.com/en/platform-for-innovation). One advantage of creating a software abstraction layer above the physical wires is that it allows "rules" to move with the virtualized network port. And they are certainly not alone: Juniper has similar visions and commented on Nicira's work here, and Cisco has similar visions with their Nexus 1000v.
But this dynamic network of controlling access per port (a port that moves around the network as VMs move) is not asking for some new type of security akin to cramming another feature into a UTM. The vision of the dynamic network demands dynamic management of security technology we already understand: control access based on application, user, port, protocol and network. It also demands high performance, something not frequently associated with bloated UTM features.
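A minimal sketch of the "rules move with the port" idea, using entirely hypothetical names (this is not Nicira's, Juniper's or Cisco's actual API): bind the access rules to a logical port identity, so a VM migration changes only the placement mapping, never the policy.

```python
# Access rules keyed by logical port (VM identity), not physical location.
rules_by_logical_port = {
    "vm-web-01": [("allow", "tcp", 443), ("allow", "tcp", 80)],
    "vm-db-01": [("allow", "tcp", 5432)],
}

# Physical placement is a separate mapping that changes on migration.
placement = {"vm-web-01": "hypervisor-a", "vm-db-01": "hypervisor-a"}

def migrate(vm: str, new_host: str) -> None:
    """Move a VM to a new hypervisor; the policy rides along untouched."""
    placement[vm] = new_host

migrate("vm-web-01", "hypervisor-b")

# After the move, the effective policy on the VM's port is unchanged.
effective_policy = rules_by_logical_port["vm-web-01"]
```

The design point is the separation of concerns: because the rules never reference the physical wire, the abstraction layer can rehome the port without any security reconfiguration.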
In all cases, complexity is increasing. Whether it is the increase in features being added to firewalls or the demand to control access at a more granular level, the complexity of the technology is increasing. And this increase in complexity will demand new management solutions. As great as the new technology is, unless it is properly managed, it won't provide the intended security.
FireMon announced today what we here in the company have known for a while now: 2011 was a spectacular year for us at FireMon, and we are so proud that we want you to know. Some of the highlights of 2011 for us:
- 50% year-over-year sales growth in firewall management solutions to enterprises while continuing to achieve profitable business operations for the year.
- Dramatic new customer growth with over 80 new Fortune 1000, government, healthcare, financial services and service provider customers including enterprises like Accor, American Automobile Association, and EarthLink and MSSPs like GBprotect.
- Acquired Saperix Technologies and patented risk analysis software developed at MIT Lincoln Laboratory that quantifies risk by identifying both critical threats and the most effective countermeasures.
- Expanded international operations in EMEA and Asia-Pacific with new executives and technical resources in the United Kingdom, France, Germany and Australia.
- Increased focus on business and technology partnerships with the addition of proven security industry channel management and business development executives.
The best part of all this great news is that we think it really positions us to make 2012 an even better year. We are poised to take off as a result of additions to both our team and our product lineup.
As the people who virtually invented the firewall management space, we are very excited to see firewalls become hot again through the introduction of "next-gen firewalls". Security device management and scenario-based risk management are two of the most important issues facing organizations, and we are uniquely situated to offer solutions and answers to them.
In the meantime, congratulations to everyone on the FireMon team for a job well done. But most of all, a huge thank you to our customers and partners, for without you none of this would be possible. Thanks to all of you, and here's to a great 2012!
Adam Ely wrote a nice article on Dark Reading ("Tech Insight: What to Do When Your Business Partner Is Breached") about how to respond when you become aware of a breach at a business partner. He discusses a very broad array of activities and responses to consider immediately, on an ongoing basis, and after a breach.
One thing that jumped out at me was the brief mention of understanding your organization’s exposure. Adam wrote,
“As you’re starting to piece together what occurred, it’s time to understand your organization’s exposure. You’ll need to fully understand what service the partner provides to your organization, the data it possesses, and how you are connected to each other. A breach of a third-party email provider has a different impact than breach of a two-factor authentication vendor. Understanding the total exposure will help you define the risk associated with the breach, the actions you must take, and how fast you must move.”
"Understand your organization's exposure" is no small task. In some cases, it's too late to mitigate; in others, there could be a massive exposure waiting to be exploited. For example, if the business partner provides a billing service for you, all the records they possess about your customers may already be exposed. In the case of an application development provider, they may have connected access to critical assets in your organization that are now exposed to a new threat. In all cases, it is important to understand how you are connected to each other, in order to monitor and mitigate any further proliferation of the breach.
Understanding the risk from a business partner, whose "threat" value must now be seen as heightened post-breach, can be a very big project. Sadly, in many enterprises, even the layer 3 network diagram is not up to date enough to provide an accurate picture of partner connections, let alone a complete picture of access. And, as Adam points out, time is not on our side here. A quick and effective response to this new threat is critical to limiting the propagation and impact of a partner breach, and understanding "exposure" to the threat is the key to that response.
Risk Analyzer is designed for just this purpose: with a threat in mind, understand the exposure of your network to that threat. Remediation activities like prioritizing vulnerability fixes, mitigation activities like blocking some connectivity until resolution is achieved, and limiting impact by actively monitoring (perhaps network recording) all access from the breached partner are all good responses, if you understand your exposure. Getting a clear picture of what is exposed is still the first step.
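As a rough sketch of that first step (the zones, rules and asset values below are invented for illustration and are not Risk Analyzer's actual data model): enumerate what the breached partner's network can reach, then prioritize by asset value.

```python
# Hypothetical access rules: (source_zone, dest_host, service).
access_rules = [
    ("partner_net", "billing_db", "tcp/1433"),
    ("partner_net", "extranet_web", "tcp/443"),
    ("internal", "hr_db", "tcp/1521"),
]

# Hypothetical business value of each asset.
asset_value = {"billing_db": "critical", "extranet_web": "low", "hr_db": "critical"}

breached_zone = "partner_net"

# Everything reachable from the breached partner is now exposed to the new threat.
exposed = [(dst, svc, asset_value.get(dst, "unknown"))
           for src, dst, svc in access_rules
           if src == breached_zone]

# Respond to the most valuable exposed assets first.
critical_first = sorted(exposed, key=lambda e: e[2] != "critical")
```

Even this toy version shows why an accurate picture of partner connections matters: a high-value asset that the partner cannot reach (the internal HR database here) drops out of the urgent response list entirely.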
Adam goes on to discuss much more than just the technical next steps, including contract negotiation and breach disclosure. But heeding his advice to understand your exposure and act fast to limit the impact is key to handling this situation.
Vulnerability…”I do not think it means what you think it means.”
Continuing our series of posts on risk, I want to shine a light on one of the most misunderstood, or better yet misused, terms in security: vulnerability. What does vulnerability mean to you? How is it connected to risk?
While vulnerability is certainly part of any risk analysis, the term has been blown out of all proportion across much of the security and risk management space. This is partly due to the great job the vulnerability management and patch management vendors have done in bringing vulnerabilities to the forefront of our risk management activities. But as we said in our earlier post, there is more to risk than vulnerability.
Rather than reinvent the wheel, I want to go back to what many consider a seminal piece on the subject: Jack Jones's An Introduction to Factor Analysis of Information Risk (FAIR). Jones perhaps said it best when he wrote,
A final point is that there's a tendency to equate vulnerability with risk. We see a frayed rope (or a server that isn't properly configured) and automatically conclude that the risk is high. Is there a correlation between vulnerability and risk? Yes. Is the correlation linear? No, because vulnerability is only one component of risk. Threat event frequency and loss magnitude also are key parts of the risk equation.
So, in spite of this, why have so many gone off the deep end on vulnerabilities? I imagine it is because highly publicized, severe vulnerabilities keep being disclosed on a frequent and regular basis, and because vulnerability is the best-"measured" factor in security today (see CVSS). To borrow a baseball analogy from Moneyball, measuring vulnerabilities to infer risk, out of context from threats, other security countermeasures and other risk factors, is like tracking "at bats" as a key metric for predicting wins. Related, yes. Directly correlated, no. Just because a vulnerability exists doesn't mean it will be exploited, that it can be reached, or that it is worth exploiting. So in measuring risk, it is critical to measure more than just vulnerabilities.
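To make Jones's point concrete, here is a toy calculation in the spirit of FAIR. The numbers are made up for illustration, and a real FAIR analysis works with ranges and distributions rather than single point estimates:

```python
def annualized_risk(threat_event_frequency, vulnerability, loss_magnitude):
    """Expected annual loss: how often a threat acts against the asset,
    the probability an attempt succeeds (vulnerability, 0..1), and the
    cost of a success."""
    return threat_event_frequency * vulnerability * loss_magnitude

# A "highly vulnerable" host that is rarely attacked and low in value...
isolated_host = annualized_risk(threat_event_frequency=0.1,
                                vulnerability=0.9,
                                loss_magnitude=1_000)

# ...carries far less risk than a modestly vulnerable, high-value target.
crown_jewels = annualized_risk(threat_event_frequency=50,
                               vulnerability=0.2,
                               loss_magnitude=500_000)
```

Ranked by vulnerability score alone, the isolated host (0.9) looks worse than the crown jewels (0.2); ranked by risk, the ordering reverses by several orders of magnitude. That is the "at bats" problem in miniature.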
I am not suggesting we stop assessing and measuring vulnerabilities. However, with risk based products like our Risk Analyzer, I hope we start including some of the other factors that need to be included in our analysis so that we can start measuring risk more completely.
Vulnerability is not Risk. Inconceivable!
The end of the year is always a crazy time at FireMon and likely most companies. We are busy wrapping up end of year business while making plans for next year. Even with all the activity, I can’t help looking back at the year that was. And what a year it was!
It seems like many years have passed since February, when we changed our name from Secure Passage to FireMon. Since then we have acquired a powerful risk analysis technology, doubled the number of employees, opened offices in London, France, Germany and soon Australia, and released Risk Analyzer. And once again we have set a new company record for annual sales. It has been a great year!
Thank you to all our customers and to all the great people at FireMon that make it happen. It is great to work with such talented and quality people. I look forward to seeing everyone next year!
Happy New Year! 2012 is going to be fantastic!
Johnnie Konstantas over on Security Week has posted the first of what looks like a series of articles on what she calls Firewall Wars 2.0. Johnnie recounts that back in the day, the big fight was between stateful inspection firewalls and proxy-based firewalls. I remember those days well and agree there is a parallel to them. However, I don't think that this time only one has to win.
Konstantas suggests we are now in a new era of firewall wars, and I tend to agree. The "Next Generation" firewalls promoted by Palo Alto Networks, and followed by many of the traditional firewall vendors, have begun to shake up the market. I don't agree with Konstantas's assessment of what constitutes a "Next Gen" firewall, however. She seems to lump them into the UTM category, which I think understates both UTM and NG firewall capabilities. The genius of the Next Gen firewall (in my opinion, of course) is that it took much of the capability of an IDP to recognize and categorize layer 7 traffic and managed it in a "positive security model". Unlike IDPs, which block traffic identified as bad, the NG firewall identifies and allows only traffic deemed acceptable. A slight shift in technology application; a gigantic shift in behavior. And while it is a great advancement for certain situations, I don't think it immediately makes stateful inspection firewalls obsolete.
What I liked best about Konstantas's review of the topic was the recognition that not all products are created equal AND not all situations require the same solution. Security needs and performance requirements should be key factors in making a decision. Not all situations call for NG firewall capabilities or UTM functionality. In fact, I would suggest not all locations call for a dedicated firewall at all; in some, a firewall feature set on a router may be a good fit.
As for the "war": as budget cycles come around for firewall upgrades and migrations, consumers will have far more choice than they did just three years ago. I suggest we not treat it as a Betamax vs. VHS battle; there is room for NG firewalls, stateful inspection firewalls and even proxies, each deployed in the appropriate location in the battle for network security.
Regardless of which firewall technology an enterprise chooses to deploy (or whether it deploys them all), the firewalls must be effectively managed. The best firewall technology won't fix a poor configuration. A good management technology like FireMon Security Manager is the answer to making sure your firewall technology is effective.