Archive for the ‘Firewall Management and Security News’ Category
Last week I spoke at the United Security Summit about operationalizing risk into everyday security operations (and had some fun with song parody titles along the way as evidenced by the photo attached to this post). The talk focused on the different elements required to answer the only question that really matters: what assets are truly at risk in your network right now? One of those elements that I highlighted was configuration management.
Configuration management has traditionally been pitched as a tool to help eliminate mistakes and downtime within your network. That is certainly one of the benefits configuration tools provide. However, I would argue that configuration tools are also risk management tools, particularly on the network and network security side of the house. If a router admin adds an ACL that suddenly opens access to an internal network from outside networks, that is a huge risk. If a firewall admin mistakenly pushes an overly permissive policy that permits any source and service to reach an internal network, you need to be alerted to that risk. As I noted in my talk, ideally your configuration tool also interoperates with your attack visualization tool, updating the attack topology continuously and in real time as these changes are made to the network and network security devices in your environment.
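The kind of check described above can be sketched in a few lines. The rule structure and sample policy below are hypothetical illustrations for the concept, not any vendor's actual configuration format:

```python
# Minimal sketch: flag overly permissive rules in a parsed firewall policy.
from dataclasses import dataclass

@dataclass
class Rule:
    source: str
    destination: str
    service: str
    action: str

def risky_rules(policy):
    """Return accept rules that permit any source and any service."""
    return [r for r in policy
            if r.action == "accept"
            and r.source == "any"
            and r.service == "any"]

policy = [
    Rule("any", "10.0.0.0/8", "any", "accept"),       # overly permissive
    Rule("192.0.2.10", "10.1.1.5", "tcp/443", "accept"),
]

for rule in risky_rules(policy):
    print(f"ALERT: rule permits any source/service to {rule.destination}")
```

A real configuration management tool would run a check like this on every pushed change and raise the alert before the risk goes unnoticed.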
I also noted that there are others doing great work around this idea of operationalizing risk, or building a risk platform. Securosis has an amazing white paper discussing building a vulnerability management platform and all of the elements needed to truly address risk in your environment. As they note in their paper, “There really shouldn’t be a distinction between scanning for a vulnerability and checking for a bad configuration. Either situation provides an opportunity for compromise.” Don’t open up your environment to potential compromise; be sure to include device configuration management as part of your day-to-day risk operations.
Roger Grimes and I have engaged in a very interesting conversation around the necessity and value of firewalls. Yesterday I took issue in my blog post with Roger’s initial claim that the firewall is dead. In response, Roger continues his argument in his post, The Firestorm over Firewalls.
Roger seems to have conceded the argument on ineffective management and instead doubled down on two core points:
- 99% of all attacks are client-side initiated and the firewall is ineffective at protecting against these attacks
- The fact that the industry is not more secure is proof that the firewall is worthless
I still take significant issue with the claim that 99% of all attacks are client-side, and Roger’s “proof” that anti-virus vendors block a lot of stuff is not compelling to me. Remember, firewalls “block” a lot of stuff too, with billions of dropped-traffic log entries generated every second worldwide. Neither point is sufficient to make or dispel the 99% claim. The Verizon Data Breach Investigations Report I referenced is also not perfect, as Roger points out, since it covers only a minority of all attacks worldwide. But it is the best source I am aware of, so I think it is still worth referencing. And pointing to a sample graphic (on page 8) meant to illustrate a documentation standard as “proof” that client-side attacks are responsible for all breaches is not very compelling either, especially since it was not written to support that point in any way. However, even if we do accept Roger’s “proof” graphic on page 8, take a look at the paragraph describing it: it claims an egress filter (a firewall) could have prevented the breach, which seems to dispel Roger’s obituary of the firewall.
But let’s set statistics aside. I imagine there are plenty of other people who can more credibly respond to Roger’s unsubstantiated claim that 99% of attacks are client-side. And I don’t mean to argue that client-side attacks are not an issue; I simply mean to claim they are not the only issue.
Instead, I would like to hypothetically accept Roger’s position that 99% of all successful attacks are client-side. I would argue this change in attack vectors through the years strengthens the case that a well-configured firewall is an effective security control. It is a matter of attackers coming in through the open window instead of the closed door. The growth in client-side attacks suggests the direct attack is being successfully thwarted by the firewall, and attackers are instead exploiting less effective controls.
The great thing about a firewall is that it employs a positive security model: only what you decide to allow is permitted, and everything else is denied. When managed well, this makes the firewall a great security solution. In contrast, malware detection and anti-virus software employ a negative security model, where everything is allowed and only known bad attacks are denied. This creates a horrible cat-and-mouse game that the attackers seem adept at winning by staying a step ahead of the latest signatures. Which raises the question: if Roger’s argument is that client-side attacks are the real problem, and the fact that we still have security problems is justification to “kill” a technology, why does he pick on the firewall? Shouldn’t he instead have called anti-virus or anti-malware or some other client-side technology dead? The firewall, according to Roger’s own logic, is the one technology in the game that is working.
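A minimal sketch of the positive model, assuming a simplified rule format (first matching rule wins; no match means deny):

```python
import ipaddress

# (source network, destination network, destination port, action)
# A toy ruleset for illustration, not any vendor's policy syntax.
rules = [
    (ipaddress.ip_network("0.0.0.0/0"), ipaddress.ip_network("203.0.113.0/24"), 443, "accept"),
    (ipaddress.ip_network("10.0.0.0/8"), ipaddress.ip_network("10.0.0.0/8"), 22, "accept"),
]

def evaluate(src, dst, port):
    """First matching rule wins; anything not explicitly permitted is denied."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for src_net, dst_net, rule_port, action in rules:
        if src in src_net and dst in dst_net and port == rule_port:
            return action
    return "deny"  # the positive-model default

print(evaluate("198.51.100.7", "203.0.113.10", 443))   # accept: explicitly allowed
print(evaluate("198.51.100.7", "203.0.113.10", 3389))  # deny: never allowed
```

The negative model inverts that final default to "allow", which is exactly why it depends on enumerating every known-bad case.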
The firewall isn’t dead. In fact, I think Roger’s arguments strengthen the case that firewalls are working. Are they perfect? No. Are they sufficient to solve all the security problems? No. Should we get rid of them because they are not perfect? NO!
And this gets to the heart of the matter: the fact that there remain security issues in information technology is not a matter of one technology working or not. It is not justification to call an effective technology “dead” because it doesn’t solve everything. When effectively managed, the firewall is a very effective security solution. Additional capabilities in NG Firewall technology continue to make it a relevant and central part of a security solution. It should not be considered THE solution, but it certainly shouldn’t be discounted either.
Today Roger Grimes posted an article on InfoWorld about the overdue death of the firewall: Why you don’t need a firewall. His case rests on two primary arguments: 1. the firewall doesn’t protect against modern threats, specifically client-side vulnerabilities and the fact that all apps run over ports 80 and 443, which can never be blocked at the firewall; and 2. the firewall is managed so poorly that it causes more problems than it solves.
Let’s separate these two points to more logically discuss each, starting with the value of a firewall in today’s threat environment. I take significant issue with his statement that, “Today, 99 percent of all successful attacks are client-side attacks”. This is not substantiated by any research, for good reason: it isn’t true. The Verizon Data Breach Investigations Report actually discusses successful attacks in significant depth and completely invalidates this point. It reports that 81% of all attacks and 99% of lost data are a direct result of “Hacking”. It goes on to specify that access to remote services (e.g. VNC, RDP) “combined with default, weak or stolen credentials” accounts for 88% of all breaches. The assumption that 99% of attacks are client-side is dead wrong.
With remote access to services remaining the greatest attack vector today, firewalls still play a very significant role, and they are changing dramatically. It would also seem that Roger is ignoring new advancements in firewall technology. NextGen firewalls are specifically adept at helping prevent the client-side attack. No longer are ports 80 and 443 an open highway through which everything can pass. User-based and application-based policies permit effective control of outbound access.
Roger’s second point, on ineffective management, is something I agree is a problem, but I don’t agree with his conclusion. His argument that ineffective management, where rules are created that permit nearly all access, renders the firewall useless is absolutely correct. Ineffective management that leads to poor configurations can turn the best firewall technology into nothing more than a router passing all traffic. But his conclusion that this means the firewall should die is a really bad leap in logic. Poor management is not cause to kill the technology. Instead, I propose more effective management.
FireMon has been dedicated to this very idea of better firewall management for over a decade. Ineffective firewalls are not caused by bad technology or incapable administrators; they are a management problem. A stream of 1,000 logs per second makes no sense to a human staring at a screen, but with some automation of log analysis, those logs can provide a wealth of information. A single firewall policy with 500 complex rules may be nearly impossible to evaluate by hand to understand what access is truly being allowed, but with a powerful policy analysis tool, it is a trivial exercise. Even Roger’s example of a poorly defined “ANY ANY” rule, created because requirements were missing, is a solvable problem with the right tools. FireMon provides a powerful Traffic Flow Analysis tool that analyzes traffic flowing through overly permissive rules, permitting retroactive correction of these problematic rules.
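The idea behind that kind of traffic flow analysis can be sketched as follows. The log records and output format here are invented for illustration and are not FireMon's implementation:

```python
from collections import Counter

# Hypothetical log records of traffic that matched an overly permissive
# "ANY ANY" rule: (source, destination, service).
matched_flows = [
    ("192.0.2.10", "10.1.1.5", "tcp/443"),
    ("192.0.2.10", "10.1.1.5", "tcp/443"),
    ("192.0.2.11", "10.1.1.5", "tcp/443"),
    ("192.0.2.10", "10.1.1.9", "tcp/22"),
]

# Tally the distinct flows actually using the rule...
usage = Counter(matched_flows)

# ...then propose replacing ANY/ANY with only the access actually observed.
for (src, dst, svc), hits in usage.most_common():
    print(f"permit {src} -> {dst} {svc}  ({hits} hits observed)")
```

Once the observed flows are tallied, the overly permissive rule can be retroactively replaced with rules covering only the access the network really uses.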
The firewall is not dead and won’t be. With next gen capabilities and effective management – which is possible and available today – the firewall will remain a critical component of security solutions forever.
Last week I had the privilege of speaking at the United Security Summit. I spoke about Risk Analysis, and compared the world of automobile traffic engineering and network security risk analysis. In my discussions with the attendees after the session, it was clear that two key elements are required to have effective and relevant risk analysis in the enterprise environments most of us work in today. For this post, I want to focus on the first key: context.
Full awareness and context of your network topology and the security controls that have been put in place are required for a full understanding of your organization’s true risk posture. Many times, when automobile traffic engineers are attempting to solve a congestion problem, they will lay data cables on a section of highway to get an idea of the number of cars, frequency and distance between units. However, they are only getting the raw data for the section of road where they happened to deploy the data cable. They don’t see the entire 30 miles of the given highway, the on-ramps, or the feeder roads leading to the ramps. The data cable does not account for weather, day of week, time, holidays, etc. There are many elements that go into understanding the full context of why traffic is congesting on a given multi-lane highway. Unfortunately, many decisions about resolving congestion are based on the raw data captured from the deployed data cable and don’t account for the full context of what is causing the congestion in the first place. This is why the default response of adding more capacity often doesn’t solve the problem.
Much like the traffic engineer, in network security we tend to rely on our own data cable solution. Most enterprises today, when assessing risk, simply run a vulnerability scanner. In the large enterprise environments we now work in, this can result in a list of thousands of vulnerabilities that need to be addressed. Many times, the engineer looking at this result will simply choose what they think are the important patches to fix and assume they have reduced risk to the organization. Without the full context of the entire network topology and the security controls put in place to control data flow, the vulnerability results have no frame of reference. If the results list a SQL vulnerability on a high-value web server as severe, one might assume this needs to be addressed immediately. However, the scan results aren’t aware that the firewall cluster fronting the web services prevents SQL traffic from reaching the web server from any internal or external source. So, is this truly a severe risk that needs to be addressed immediately? Without the full network context, vulnerability scanners by themselves are unable to give you an accurate picture of the risk to your network environment.
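One way to picture adding that context is to cross-check scanner findings against what the firewall actually permits to reach each host. The findings and reachability data below are hypothetical:

```python
# Toy sketch: downgrade scanner findings when a firewall already blocks
# the vulnerable service from every source.
findings = [
    {"host": "web01", "service": "tcp/1433", "severity": "severe"},  # SQL
    {"host": "web01", "service": "tcp/443",  "severity": "severe"},
]

# Services the firewall permits to reach web01 (assumed, for illustration).
reachable_services = {"web01": {"tcp/80", "tcp/443"}}

triaged = []
for f in findings:
    exposed = f["service"] in reachable_services.get(f["host"], set())
    context = f["severity"] if exposed else "mitigated by firewall"
    triaged.append((f["host"], f["service"], context))
    print(f'{f["host"]} {f["service"]}: {context}')
```

With the firewall context folded in, the "severe" SQL finding drops out of the immediate-fix list, while the genuinely exposed service keeps its severity.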
In our next post, we’ll discuss the importance of speed in network risk analysis.
Our very own president and CTO, Jody Brazil, was the guest this week on the Security.Exe podcast, hosted by security blogger/podcaster Alan (ashimmy) Shimel. Jody is a returning guest to the show, having appeared several times over the years on the popular podcast.
In this episode, Jody talks about recent developments at FireMon including the forthcoming Risk Analyzer product line. Additionally, Jody and Alan talk about the general state of security and recent events in the security industry. The interview is about 20 minutes long and well worth listening to!
Chris Hoff (@beaker) had a blog post up recently on his Rational Survivability blog, once again making the call for automating security (and compliance and audit as well) in both physical and virtual environments. This is a cause that Hoff has long championed and is certainly a worthy goal.
Chris believes that leveraging APIs is one of the ways we can accomplish more automation. He pointed to an example written by Richard Park of Sourcefire that uses Perl and the VMware APIs to automate firewall functions in a virtual environment, accomplishing things such as:
- List the current firewall ruleset
- Add new rules
- Get a list of past firewall revisions
- Revert back to a previous ruleset revision
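For illustration, the four operations above might look like this against a hypothetical REST API. The endpoint paths and payloads are invented for this sketch (Park's original example used Perl against the VMware APIs); the code builds the requests rather than sending them:

```python
import json

# Hypothetical management endpoint, for illustration only.
BASE = "https://vfw.example.com/api"

def request(method, path, body=None):
    """Build (not send) an HTTP request line for the hypothetical API."""
    payload = json.dumps(body) if body is not None else ""
    return f"{method} {BASE}{path} {payload}".strip()

# The four operations from the list above:
print(request("GET",  "/ruleset"))                                   # list current ruleset
print(request("POST", "/ruleset/rules",
              {"src": "any", "dst": "10.1.1.5", "svc": "tcp/443"}))  # add a new rule
print(request("GET",  "/ruleset/revisions"))                         # list past revisions
print(request("POST", "/ruleset/revert", {"revision": 41}))          # revert to a revision
```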
The idea of virtualized firewall ruleset management is of course something near and dear to us here at FireMon. We have thought long and hard about virtualization in general and virtualization of security in particular. Of course this is something that we think FireMon’s product suite will have deep capabilities around, especially as more network operations move to virtual environments.
Network management in the physical world has been sorely lacking in general and security device management even worse. Frankly, this is why FireMon has been so successful. The virtual world, as Hoff tweeted, does promise some relief in this area.
But fundamentally we believe that automation using APIs and such is only part of the answer. How do you script in context? Park’s post discusses the ability to programmatically create rules, which is cool, but the framework to automate change is much bigger than technically creating the rule. What rule should be created? Who has permission? What defines the rule to create? What is the process to create it? Firewall administration tools have offered user-friendly ways to create rules, yet corporations routinely create the wrong rules and excessive access, and then ultimately fail to manage them long term.
Perhaps, just as in the physical world, in the virtual world the ability to automate firewall management in and of itself does not necessarily solve your issues.
Don’t get me wrong, some things are logical for automation. For example, if you are using VMs to scale, each time you bring up a new VM for the same purpose as another, you may want to define the exact same policy but for the new VM’s IP. This makes sense, is very repeatable, and is perhaps the only way to achieve the security and operational goals of virtualization.
But there is a lot that has to go into security management, both physical and virtual. Here at FireMon we spend a lot of time learning and thinking about it and we see an opportunity to address some of these challenges.
It seems every day another security vendor releases its version of the NextGen firewall. While Palo Alto Networks staked its claim to the NextGen firewall some time ago, everyone from Check Point to Fortinet has recently announced NextGen firewalls.
FireMon recognized the value and power of these firewall advancements and has been a partner of Palo Alto for some time, focused on providing management for this new technology. While NextGen firewalls offer significant and important new capabilities to the firewall technology, the management problem remains. No matter how great the technology, if it is ineffectively managed, it will fail to solve the problem.
There are a couple key advancements in NextGen firewalls worth noting: user-based access policies and application intelligence.
While most firewalls have provided user access control by requiring secondary authentication at the gateway, this was completely disjointed from the existing directory infrastructure and complicated to manage. As a result, it was not often implemented. NextGen firewalls, through directory integration, have the potential to change access management from IP-based to user- or user-group-based access. This is a huge advancement, changing the paradigm from IP access control to user control. And in a world of mobile and wireless devices, this makes access control much more dynamic and effective.
Application intelligence, and the incorporation of that intelligence into the firewall policy, helps address the reality of web applications and dynamic protocol/port use in malware and applications. Access policies can now be managed by application or application category. Not only does this address the desire to control application use in the enterprise, it can help address malware that makes its way into the enterprise in any form (on a USB drive, laptop, phone, etc.). If the policy is effectively managed, malware that used to tunnel freely across open ports out of the network, potentially enabling backdoor command-and-control capabilities, will be denied, blocking a critical security issue.
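Conceptually, an application-aware egress policy keys its decision on the identified application rather than the port. A toy sketch, assuming application identification (the hard part) has already happened and using invented application names:

```python
# Applications the policy explicitly permits outbound; everything else,
# including malware tunneling over an "open" port like 443, is denied.
allowed_apps = {"web-browsing", "ssl", "office365"}

def egress_decision(app):
    """Positive model applied at the application layer."""
    return "allow" if app in allowed_apps else "deny"

print(egress_decision("web-browsing"))        # allow
print(egress_decision("unknown-tcp-tunnel"))  # deny
```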
But NextGen firewalls can’t solve the problem of poor management. Even these new capabilities don’t magically solve the management problem. In fact, in many ways, they create new problems in need of solutions. I am a big proponent of this advancement in firewall technology and we are excited to offer solutions to help address these new issues. Be on the lookout for a few posts addressing these issues and FireMon’s innovative solutions to help organizations manage the NextGen firewalls.
I have to admit, like most of you I was amazed to hear that in regard to the recent breaches on the PSN, Sony did not have a firewall in place. I guess one could say that is one way to solve the firewall management issue.
All kidding aside, I could almost understand not patching or updating their Apache software. Patching and keeping software up-to-date is a task that many organizations don’t do a great job of even though there are some great solutions out there. But putting in a firewall?
The most recent data I’ve seen says firewalls are installed in well over 95% of network installations. With even home cable and DSL routers having firewall software installed, I always assumed the small fraction of networks without firewalls had to be those small installs tucked back in some out-of-the-way corner. To think that a company the size of Sony would forego firewalls is mind boggling!
So, why would Sony not put a firewall in place? I would guess it is a management issue (I know I am a bit obsessed with the “security needs management” theme lately). But the same people who didn’t comment for almost a week after this incident, who have had repeated incidents and breaches, and who installed a rootkit on their CDs a while back just don’t seem to get the whole internet security/privacy thing.
Here is the lesson for the rest of us: Sony is going to pay dearly for this, I’m afraid. This could be a case that the authorities and courts use as an example. A defendant with deep pockets, sensitive financial data lost and it appears a negligent attitude and track record. I don’t think anyone is going to want to be the next Sony.
Go one step further though. Just putting a firewall in place is not enough. You need firewall analysis and firewall management if your investment in a firewall is going to be worthwhile. Just don’t stop at the firewall. Remember, security needs management!
Fresh off of the InfoSec Europe security conference I am still invigorated. From my prior post you can see that we had a great show speaking to existing and potentially new customers. But the thing I realized this week is that InfoSec Europe has become an anchor event for the security industry in Europe.
Between the SC Magazine Europe Awards, B-Sides London, the CSA event, not to mention the FireMon Welcome Reception (had to throw that in), the conference has spawned an entire security ecosystem of events around it. While I have my frustrations with trade shows being so product-focused rather than solution- or information-focused, they do draw a crowd. And this crowd of 12,000+ attendees now has more than just a trade show in which to participate. InfoSec Europe had all the feel of a major event. As big as RSA or Black Hat, I think InfoSec Europe is where to be if you are a player in the security industry.
We had a great time meeting and greeting friends old and new. Also, our own Alin Srivastava was featured on Infosecurity Europe TV. You can see the interview below. Congratulations to the team at InfoSec for putting on such a great show. Looking forward to next year already.
In the meantime, the FireMon team will be at a number of other shows, including next week’s Check Point Experience Barcelona on May 4-5!
One thing, though, is that whether it is InfoSec Europe, RSA, Black Hat or even local ISSA conferences, what we hear from people is pretty much the same. As I wrote in my last piece, security needs management! Even if you have the tools, using them correctly and efficiently is a challenge. Taking the ideas we hear and conveying them back to the FireMon product team is one of the most important reasons for us to attend shows like this.
As we have written before, all of us here at FireMon are really excited about the acquisition we announced this week of Saperix Technology and their patented risk analysis technology. But it looks like we are not the only ones:
Lawrence Walsh of Channelnomics gives his thoughts on the deal. You can read it here. Our favorite quote: “Perhaps 10 years from now the security world will mention FireMon in the same breath as Symantec, McAfee and Trend Micro“. That would be nice, but we have bigger plans than that even!
Alan Shimel on his Ashimmy, After All These Years Blog wrote that we had leapfrogged the competition.
DarkReading covered the news here.
The Kansas City Business Journal also covered the story here.
Stay tuned for more news on this soon!
Author’s Note: CRN did a nice piece covering the story too! You can read all about it at: http://www.crn.com/news/security/229402206/firemon-buys-risk-analysis-firm-plans-to-add-government-vars.htm