SC Magazine UK had an interesting article by Dan Raywood. Raywood interviews Kevin Dowd of CNS Networks as Dowd sets up a network that does not use any network firewalls. The premise: in light of the recent Night Dragon attacks detailed by McAfee, has the time of the perimeter firewall finally passed?

Dowd goes through a lengthy explanation that a firewall is not a silver bullet, that keeping network topology as simple as possible is key to making security more practical, and that cutting services that are unnecessary but masked by firewalls is another way to reduce risk. To sum it up, Dowd recommends:

If you are going to deploy servers, laptops, desktops or any networked equipment the following rules are the simplest way to stay secure:

  • Work out the services you actually need
  • Work out who will need to access those services and create restricted access
  • Disable or remove everything else
  • Review the network regularly
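Those four rules are sound, and the last one in particular is easy to automate. As a rough sketch of ours (not from the article, and the approved-port list below is entirely invented), a recurring check like the following can flag services listening on a host that nobody signed off on:

    # Hypothetical sketch: flag listening TCP ports that are not on an approved list.
    # The approved set below is invented purely for illustration.
    import socket

    APPROVED_PORTS = {22, 443}          # services we decided we actually need
    PORTS_TO_CHECK = range(1, 1024)     # well-known port range on this host

    def listening_ports(host="127.0.0.1"):
        """Return the set of ports accepting TCP connections on the given host."""
        open_ports = set()
        for port in PORTS_TO_CHECK:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(0.2)
                if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                    open_ports.add(port)
        return open_ports

    if __name__ == "__main__":
        unexpected = listening_ports() - APPROVED_PORTS
        if unexpected:
            print("Unapproved services listening on ports:", sorted(unexpected))
        else:
            print("Only approved services are listening.")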

Dowd and Raywood end the article by saying:

We therefore believe that businesses should think about effective implementation of security controls to protect their networks and focus on proper internal security, from employee passwords to ensuring that each application and system is properly hardened. Done properly this could, with specific networks, mean that you could remove the need for a firewall completely.

So let's start with what we agree on. A firewall is not a silver or magic bullet. There are no silver or magic bullets in security. If you think there are, you probably also believe a large bunny is going to hide some colored eggs at your house this month.

We also agree with KISS – keep it simple, stupid. Adding complexity makes your network … well, more complex. However, you have to realize that sometimes complexity cannot be avoided.

Overall, though, my real problem with Dowd and Raywood's article is that just because a firewall isn't perfect doesn't mean you throw the baby out with the bath water. I am not aware of anyone claiming that using a firewall means you don't have to use other security technologies, and using a firewall does not give you an excuse to leave your common sense at the perimeter. I absolutely agree that over-reliance on firewalls is foolish and would severely put the network at risk. But using a firewall as a core, central device to limit risk is an appropriate implementation.

Specifically, though, I have a few more comments on the experiment that Dowd and team ran. They tested the theory by connecting devices directly to the Internet. In the experiment, they limited the running services to only those necessary and restricted access to known and trusted hosts. This is very good practice. However, in even a moderately sized network, this implementation would run headlong into one of their own primary concerns: complexity.
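To make that concrete, here is a minimal sketch of what a host-level "known and trusted hosts only" restriction looks like in practice. The port and trusted subnets are hypothetical, and the article does not describe the team's actual tooling; this is simply an illustration of per-host filtering that has to be repeated, and kept current, on every exposed machine:

    # Hypothetical illustration of a host-level allowlist in front of a single service.
    # The port and "trusted" networks are made up; nothing here comes from the article.
    import ipaddress
    import socketserver

    # "Known and trusted hosts" for this one service on this one host.
    TRUSTED_NETWORKS = [
        ipaddress.ip_network("10.0.5.0/24"),    # hypothetical management subnet
        ipaddress.ip_network("192.0.2.17/32"),  # hypothetical backup server
    ]

    class AllowlistedHandler(socketserver.BaseRequestHandler):
        def handle(self):
            src = ipaddress.ip_address(self.client_address[0])
            if not any(src in net for net in TRUSTED_NETWORKS):
                # Untrusted source: drop the connection immediately.
                self.request.close()
                return
            # Trusted source: hand off to the real service logic (placeholder).
            self.request.sendall(b"hello, trusted host\n")

    if __name__ == "__main__":
        # Every host that exposes a service needs its own copy of this policy.
        with socketserver.TCPServer(("0.0.0.0", 2222), AllowlistedHandler) as server:
            server.serve_forever()

The last comment is the catch: this policy lives on each individual host, and every change to the trusted list has to be pushed to all of them.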

Consider the scenario where a DMZ is established to permit access to public servers. It is not possible to limit the services to only the publicly available ones, because the servers still have to be monitored and managed. So management services are enabled: SSH, SNMP, backup agents, and so on. Each service increases the attack surface. Anderson (one of the team leaders in the article) addresses this by using native host firewalls to limit access to services from only known hosts. Whether he means actual host firewalls (such as iptables) or application-specific access controls (such as Apache's access directives) is unclear, but either way he is describing access-control filtering at the host level. To effectively control access, then, every host must be individually configured. Standardizing configurations may seem like an easy answer, but it does not address the reality of how these devices are actually used: different groups of users require different access to different servers. A traditional network firewall is better suited to these problems. Grouping similar devices into the same rules allows for convenient, easier-to-manage access definitions (as the sketch below illustrates), and unique access can still be handled as appropriate.
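Here is a rough model of that group-based approach. The group names, subnets, and services are all invented for illustration; real firewalls express this in their own rule languages, but the structure is the same: define a group once, attach rules to the group, and every member inherits them.

    # Toy model of group-based network firewall rules (all names and subnets invented).
    from ipaddress import ip_address, ip_network

    # Define a group once; every member inherits the group's rules.
    GROUPS = {
        "admin_workstations": [ip_network("10.0.5.0/24")],
        "backup_servers":     [ip_network("10.0.9.0/28")],
        "dmz_web":            [ip_network("203.0.113.0/27")],
        "anywhere":           [ip_network("0.0.0.0/0")],
    }

    # One rule per (source group, destination group, service) instead of one per host.
    RULES = [
        ("anywhere",           "dmz_web", "http"),
        ("anywhere",           "dmz_web", "https"),
        ("admin_workstations", "dmz_web", "ssh"),
        ("backup_servers",     "dmz_web", "backup"),
    ]

    def in_group(addr, group):
        """Return True if the address falls inside any subnet of the named group."""
        return any(ip_address(addr) in net for net in GROUPS[group])

    def permitted(src, dst, service):
        """Evaluate the rule set: allow only traffic matching an explicit rule."""
        return any(
            in_group(src, src_group) and in_group(dst, dst_group) and service == svc
            for src_group, dst_group, svc in RULES
        )

    # A new admin workstation only needs to land in the right subnet/group;
    # no per-host rule edits are required.
    print(permitted("10.0.5.23", "203.0.113.10", "ssh"))     # True  (admin -> DMZ ssh)
    print(permitted("198.51.100.7", "203.0.113.10", "ssh"))  # False (unknown host)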

Another particularly interesting point about this article is that Anderson is not actually recommending a firewall-less network. He is instead recommending a firewall for every host in the network. That is unfortunately too complex for most environments beyond the simplest of networks, and it reinforces the basic premise that firewalls should be deployed as an effective tool to limit risk to an organization.

In conclusion, while we always welcome discussion around the appropriate use and management of firewalls and innovative ways to secure the network, the fact that a particular attack was successful is not a reason to take up the “throw the firewalls out” chant.