Network Operation: Developing a Smarter Security Framework




As any network operator can attest, the words “firewall” and “security appliance” carry multiple connotations, some of which are flattering and others that are… not.

That being said, developing scalable and feature-driven security devices is a difficult task, especially while trying to provide the best performance at the most competitive price.


Over the past few years, the number of enterprises that have migrated to hybrid datacenter and cloud architectures has increased dramatically, exacerbating underlying issues around throughput, redundancy and administration.

As a result, today’s enterprise architectures are far more distributed than ever before – most often a conglomeration of multiple vendors, code versions and management methods.

Imagine being an operator responsible for multiple datacenter network security systems and having to integrate your security management methodology into a cloud environment.

This remains a daunting challenge, not only because many organizations struggle to find critical staff and to manage disparate systems seamlessly from a central point, but also because it is hard to maintain a high level of faith that everything will operate in the same manner after a code upgrade or the activation of a new feature.

Since the rest of the networking space has already adopted horizontal scaling for hardware and software, why aren’t we following the same methodology for security? Security appliances are not carrier grade routers, nor should they be treated as such. Yet, the sheer number of features that enterprises require from their security systems often comes at the sacrifice of throughput, creating subsequent traffic flow issues across the network.

As a result, firewalls and other security appliances must evolve to operate as software on commodity hardware or in a virtual machine, so they can both scale horizontally and deliver all the necessary features, regardless of their deployment location. A common, easily tunable API abstraction management layer will also be critical in reducing operational overhead for network engineers.
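To make the idea of a common abstraction layer a bit more concrete, here is a minimal sketch of a vendor-neutral rule object being rendered into a device’s native syntax. Every class and method name here is hypothetical and does not correspond to any real product API; it is only meant to show intent expressed once and translated per back end.

```python
# Hypothetical sketch of a vendor-neutral policy abstraction layer.
# None of these classes correspond to a real product or vendor API.
from dataclasses import dataclass


@dataclass
class PolicyRule:
    name: str
    source: str        # CIDR block or named object group
    destination: str
    service: str       # e.g. "tcp/443"
    action: str        # "allow" or "deny"


class FirewallBackend:
    """Base class: each appliance or virtual firewall implements render()."""
    def render(self, rule: PolicyRule) -> str:
        raise NotImplementedError


class AclStyleBackend(FirewallBackend):
    """Example back end that emits a simple ACL-style line."""
    def render(self, rule: PolicyRule) -> str:
        verb = "permit" if rule.action == "allow" else "deny"
        proto, port = rule.service.split("/")
        return f"{verb} {proto} {rule.source} {rule.destination} eq {port}"


def push_policy(rules, backends):
    """Render the same abstract policy for every device, wherever it runs."""
    return {type(b).__name__: [b.render(r) for r in rules] for b in backends}


rules = [PolicyRule("web-in", "0.0.0.0/0", "10.1.2.0/24", "tcp/443", "allow")]
print(push_policy(rules, [AclStyleBackend()]))
```

The point of the sketch is simply that the operator expresses intent once, and each back end, whether physical, virtual or cloud-hosted, translates it into whatever the local device understands.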

By adopting this mindset, security systems will provide a much higher level of accuracy for threat detection and mitigation, along with better rule set administration, resiliency and throughput – all while reducing operational and capital expenditures. The rest of the network must communicate and share critical information, especially as we progress further into Software Defined Networking (SDN) and Network Functions Virtualization (NFV).

Yet, network security systems continue to operate as islands today.

To change this we must truly embrace the mindset that security is just another key service operating within the chain that is the network. Only then can we move forward in developing a more protective, unified, vendor-neutral and architecturally agnostic framework.


David Mitchell

About David Mitchell

David Mitchell is Founder & CEO of Singularity Networks and serves as advisor to Dtex Systems and Versa Networks. Mitchell is also a former advisor to network security specialists Arbor Networks and FireEye. Prior to launching Singularity in 2014, David served as Lead Network Security Engineer at Twitter and held similar roles at Netflix, Yahoo and Time Warner Telecom. He also remains active in the operational security community, including participation in efforts such as the Internet Engineering Task Force (IETF).

Future Considerations: Software Defined




If Software Defined Networking (SDN) becomes the open ubiquitous technology that I think it will, everything changes.

That sounds dramatic, but I believe that SDN will change many aspects of how we deploy and manage networks. It also creates a completely new paradigm for security enforcement and an opportunity to think differently.

I think it will be amazing for people, for the industry, and for everything we try to do in security. It will power an Internet of Things (IoT) and forever elevate the value of data anytime, anywhere. I see SDN as the next critical step that no one will ever know happened.


When is this amazing change supposed to happen, you ask? It’s already started and it will be ongoing for many years to come. It’s not something where you can just flick a switch and suddenly it’s all there and running; there’s still lots of work to do.

But we can flick the switch ahead of time when thinking about how to build SDN strategy, and ultimately a secure one. To do this, you have to drop all current expectations of the technologies that you’re running today and think about what SDN is meant to change at all levels.

To get in the right state of mind for this exercise, consider a situation where you’ve been running a library for many years. It’s stacked full of books, magnificent collections for anyone to access and read via a book tracking system that you’ve spent millions on, essentially putting the Dewey Decimal System online.

Then tragedy strikes one night and the entire collection, along with the building, burns. The insurance money comes in, and you are left with a real question: does it make any sense to rebuild a building full of books, knowing what we already know about technology? Is there still a place for this? Before, this option was assumed due to a long history of value, but when presented with the chance, or in fact forced, to recreate the library, does designing and deploying a building full of books make any sense?

I ask this question because you have to go into SDN with just that frame of mind. Ask yourself if what you’re doing today makes any sense in this new design, then go a step further. Ask yourself what you need to do to empower SDN instead of looking at it from the perspective of how it might work based on how you do things today.

What It Takes

Let’s flick that switch now and consider how SDN is evolving the network by walking through an SDN-enabled infrastructure from network to application.

SDN extracts network intelligence directly from switches into a centralized controller. This controller contains all the objects in the environment, from switches to applications, and everything between. The controller can send commands like “put, get, forward, delete, etc.”, as well as take in data about the state of any forwarding tables (and that’s without getting into the technical details, which is another blog unto itself).
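As a rough illustration of that controller model, here is a minimal sketch of a controller holding forwarding state on behalf of its switches. The method names simply echo the “put, get, forward, delete” idea above; this is not modeled on any real SDN controller API.

```python
# Minimal sketch of a centralized controller holding forwarding state.
# Command names mirror the "put, get, forward, delete" idea above and are
# illustrative only; this is not a real SDN controller API.
class Controller:
    def __init__(self):
        self.tables = {}   # switch_id -> {match: action}

    def put(self, switch_id, match, action):
        """Install a forwarding entry on a switch."""
        self.tables.setdefault(switch_id, {})[match] = action

    def get(self, switch_id):
        """Return the current state of a switch's forwarding table."""
        return dict(self.tables.get(switch_id, {}))

    def delete(self, switch_id, match):
        """Remove a forwarding entry."""
        self.tables.get(switch_id, {}).pop(match, None)

    def forward(self, switch_id, packet_match):
        """Decide what a switch should do with a packet it cannot match."""
        return self.tables.get(switch_id, {}).get(packet_match, "drop")


ctrl = Controller()
ctrl.put("edge-1", ("10.0.0.5", 443), "out-port:7")
print(ctrl.forward("edge-1", ("10.0.0.5", 443)))   # -> "out-port:7"
```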

Consider a network where you can make forwarding decisions based on far more than IP data. I’m talking about simply knowing where the connection needs to be and forwarding it across any infrastructure to any application, subject to whatever security controls you may need. Maybe you rewrite the IP header as it moves across physical connections, but that’s not even necessary to consider when working with SDN, as the process is abstracted away from us.

Think about what you could do with the power to forward packets based on a myriad of possible scenarios from network to application, and being able to track and protect that flow on demand. Running out of CPU and memory in one datacenter? Send the flow over to another. That one getting tapped out? Push it out into a cloud infrastructure.

A new version of your application going online? No problem, as the next flow will be directed to the virtual machine running the new code. Problem with the new code? OK then, the next flow goes back to the previous version of the service, all on demand and orchestrated. I can’t wait to see the creative things that people do with this level of programmability and control.
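A toy sketch of that kind of flow-steering decision might look like the following. The capacity thresholds, site names and canary logic are all invented for illustration, not drawn from any particular controller.

```python
# Hypothetical flow-steering decision combining capacity and application
# version, in the spirit of the scenarios above; thresholds and names are
# invented for illustration.
def choose_target(flow_id, sites, canary_healthy):
    # Prefer the new code version while it is healthy, otherwise fall back.
    version = "v2" if canary_healthy else "v1"
    # Pick the first site (datacenter or cloud) with CPU/memory headroom.
    for site in sites:
        if site["cpu"] < 0.80 and site["mem"] < 0.80:
            return {"flow": flow_id, "site": site["name"], "version": version}
    # Everything is tapped out: burst the flow into cloud infrastructure.
    return {"flow": flow_id, "site": "cloud-burst", "version": version}


sites = [{"name": "dc-east", "cpu": 0.92, "mem": 0.70},
         {"name": "dc-west", "cpu": 0.55, "mem": 0.60}]
print(choose_target("client-42", sites, canary_healthy=True))
# -> {'flow': 'client-42', 'site': 'dc-west', 'version': 'v2'}
```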

The Security Perspective

How is security affected by all of this?

For starters, it’s simply abstracted to a service with policy eventually moving into orchestration of that service. Don’t get me wrong, security policy management remains relevant, but it moves from a dictated security policy to a monitored security policy, just not right away. And over time, traditional enterprise security policies will become less relevant. To show you what I mean, we can jump ahead to the concept of a monitored policy as part of this exercise.

Let’s say that an application request comes in the form of a network call to a Web service to return data for a custom application, perhaps a new wearable armband health application. The network then checks its table to see where to send the connection, tags it accordingly, and forwards it on. In turn, the controller knows an application request is on its way for this particular service, and most likely already has a server up and running, ready to service it.

Since the controller knows how many clients it can service per virtual machine, with defined CPU and memory, it keeps spinning up new virtual machines and redirecting traffic accordingly. Including security in this process becomes a simple task: there’s no need to deploy hardware and create choke points, as security simply becomes another application to the abstracted network.
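A toy sketch of that capacity-driven scaling decision, with an invented per-VM client limit and hypothetical names, might look like this:

```python
# Toy sketch of the controller's scaling decision described above.
# The per-VM client capacity and all names are invented for illustration.
CLIENTS_PER_VM = 200   # derived from the VM's defined CPU and memory


def place_client(vms, spin_up_vm):
    """Return the VM that should service the next client, scaling if needed."""
    for vm in vms:
        if vm["clients"] < CLIENTS_PER_VM:
            vm["clients"] += 1
            return vm["name"]
    new_vm = spin_up_vm()          # controller brings another VM online
    new_vm["clients"] = 1
    vms.append(new_vm)
    return new_vm["name"]


vms = [{"name": "web-vm-1", "clients": 200}]
print(place_client(vms, lambda: {"name": "web-vm-2"}))   # -> "web-vm-2"
```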

For example, we can forward the data based on any decision, not just the network setup, and offload a copy for traffic validation – essentially running an on-demand security scan on the same flow and letting the controller know if there’s a problem. Based on the orchestration decisions, the controller can have the traffic flow quarantined, blocked, redirected or just plain dropped; how and why will be tied to the value and risk of the service.
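To sketch what “security as just another application” could look like in code, here is a hypothetical on-demand scan of a mirrored flow copy, with the controller acting on the verdict. The function names, verdicts and actions are all invented for illustration.

```python
# Hypothetical sketch: a copy of the flow is handed to an on-demand scanner
# and the controller acts on the verdict. Names, verdicts and actions are
# invented for illustration.
def scan_copy(flow_copy: bytes) -> str:
    # Placeholder inspection; a real service would do deep analysis here.
    return "malicious" if b"attack" in flow_copy else "clean"


def handle_flow(decisions, flow_id, flow_copy, service_risk):
    verdict = scan_copy(flow_copy)
    if verdict == "clean":
        return "forward"
    # How aggressively we react is tied to the value and risk of the service.
    action = "quarantine" if service_risk == "high" else "redirect"
    decisions.append((flow_id, action))
    return action


decisions = []
print(handle_flow(decisions, "flow-7", b"GET /attack", service_risk="high"))
print(decisions)   # [('flow-7', 'quarantine')]
```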

This is the point where we move from security policy management to security policy monitoring. As applications are defined and brought online, information will be collected on what data is handled by which users and corresponding threat scanning can scale up or down accordingly. It will be this on-demand delivery of security services that will enable rapid scaling of new applications.

While I’m excited about all of these possibilities, I’m fearful of the potential nose dive that could occur if vendors try to create some form of lock-in. SDN as a technology can’t be stopped by this and will emerge no matter what; it’s just a matter of how long it takes. Being realistic, it’s just going to take a few generations of equipment to get there.

However, if we truly enable SDN from networks, along with security, and into the application, many of our current challenges go away. Not to say we won’t have new issues to consider, but I’ll save that discussion for another time.


Kellman Meghu

About Kellman Meghu

Kellman Meghu is Head of Security Engineering (Canada and Central US) for Check Point Software Technologies Inc. His background includes almost 20 years of experience deploying application protection and network-based security. Prior to joining Check Point, Mr. Meghu held various network, VoIP and security engineering roles with European telecommunications giant Alcatel and Electronic Data Systems (EDS), and worked as a private consultant.

Viral Video: Of Access Control, Hyper-Segmentation and Vendor Viability




Even after a handful of insightful blog posts from a wide range of experts, along with some related research, the question still looms large: what is the future of the firewall?

In this installment of the series we switch over to podcast/video mode, with FireMon Founder and CEO Jody Brazil joined by leading industry expert and Securosis Analyst Mike Rothman to discuss and debate the matter at hand.

Will firewalls break out of the box? How will the trend toward hyper-segmentation influence the process? And where does the added capability of the NGFW model factor into it all? Importantly, will the leading firewall providers of today advance their capabilities to address tomorrow’s requirements, or be replaced by someone else?

These are just a few of the issues that Jody and the ever-outspoken Mr. Rothman bring to the table.

After you’ve watched the video on YouTube, we hope that you’ll head back to the blog to offer your comments or contradictions.

Do these guys have the answers or does the debate introduce even more questions to consider?

Join the discussion!


Matt Hines

About Matt Hines

Matt Hines leads product marketing efforts at FireMon. Prior to joining FireMon, Hines held similar roles at TaaSERA, RedSeal Networks and Core Security Technologies, and worked for over a decade as a journalist covering the IT security space for publishers including IDG, Ziff-Davis, CNET and Dow Jones & Co.

Future of the Firewall: And Now… Hard Data




Over the past few weeks you’ve been reading a lot of different perspectives in this space regarding the “Future of the Firewall” (and if you haven’t please see the related archive).

In these posts, authored by leading practitioners, analysts and industry experts, and in those that will follow, a lot has been said about the critical role that firewalls have played in the evolution of network security, and how they will continue to shape the future.

To help put these concepts and opinions into context, we’re happy to announce that we’ll shortly publish the results of a related research project, the State of the Firewall 2014 Survey.

For those interested in a sneak peek at the data to be presented in the report, please join our related webcast scheduled for tomorrow (or, if reading this at a later date, click on the same link to hear a recorded version and/or receive a copy of the report once published).

What did this survey of 720-plus practitioners reveal?

Among the breakout results:

Firewalls remain highly strategic despite pervasive management challenges, driven by a wide range of requirements, most notably API integration and NGFW capabilities.

Despite the continued evolution of network security solutions and methodologies, existing firewall infrastructure remains a key component of overall security strategy and will remain so in the future; at the same time, related management issues remain a significant challenge. To note, roughly 96 percent of respondents indicated that firewalls remain a “critical” element of overall security architecture, with 92 percent citing the devices as a central component of their plans over the next five years. Meanwhile, some 52 percent noted existing management concerns, led by firewall rules/policy complexity. Perhaps most surprisingly, when taken as a whole, firewall buying decisions are now influenced as much, if not more, by matters of API integration and NGFW capability as by price or performance.

Next-generation devices are seen as critical and being adopted gradually, with a wide range of intentions and a broad set of related management and migration concerns.

As highlighted in the preceding conclusions, ongoing adoption of NGFWs and the related feature set is being approached incrementally, as a practical element of the same value-based buying process leveraged in the acquisition of traditional firewalls. While 42 percent of respondents indicated that NGFWs still make up less than 25 percent of their overall network security infrastructure, or none at all, nearly all practitioners surveyed expressed pervasive interest across the various features offered by next-generation systems. At the same time, survey respondents indicated widespread concern about the numerous challenges involved in NGFW management and migration, across nearly every process involved, from optimizing rule sets and correctly enforcing access controls to minimizing the impact on operations.

Firewalls will play a significant role in emerging cloud, SDN and DevOps paradigms, which are viewed as major shifts in overarching matters of network evolution.

Survey results firmly establish that cloud computing, SDN and DevOps are widely viewed as key technology platforms for adoption, but not without support from firewalls. While a majority (59 percent) of respondents indicated that cloud computing, SDN and DevOps represent fundamental shifts in networking evolution, the survey also finds that 43 percent of practitioners believe existing concepts of access control will remain a critical element of related best practices. Further, 58 percent of respondents specifically reinforced that both traditional firewalls and NGFWs will play a significant role in securing cloud environments. Regarding the adoption of DevOps, far more survey respondents indicated that current network firewall infrastructure does not stand to inhibit related efforts than believed that it does.

If these figures and the related trends pique your interest, you’ll certainly appreciate the depth of detail offered by the full State of the Firewall 2014 Report.

Tune in to tomorrow’s webcast (live or after the fact) and register to get your copy today.


Matt Hines

About Matt Hines

Matt Hines leads product marketing efforts at FireMon. Prior to joining FireMon, Hines held similar roles at TaaSERA, RedSeal Networks and Core Security Technologies, and worked for over a decade as a journalist covering the IT security space for publishers including IDG, Ziff-Davis, CNET and Dow Jones & Co.

Firewall Futures: Inspecting the Challenge and Opportunities




A blog series on the “Future of the Firewall”; that’s optimistic, as it implies that the firewall has a future.

For the record, I think that it does; I just hope we use firewalls more wisely in the future. I see both challenges and opportunities for the present and the future of the firewall, and, as is often the case in life, the challenges and opportunities are two sides of a single coin.

Modern firewalls have become much more than packet filters, and are much more powerful – if used correctly. The great advantage in versatility of an NGFW, UTM, or whatever you use does carry a burden of complexity.


A common challenge remains proper configuration; this is a challenge we have faced for years, and I do not see it disappearing any time soon. Not that early firewalls were exactly “user-friendly”, but with limited feature sets came a smaller range of things to get wrong.

I think that, in general, modern firewalls are easier to deploy and configure properly, but added features do add complexity. The race to add features and functionality to firewalls (or any technology) is also a race with usability and user experience, a race we don’t always win.

IPv6 presents a related threat to the effectiveness of firewalls. I know I’m not alone in having seen firewalls misconfigured to the point of being very expensive NAT devices. As worrying as that is with IPv4, at least most organizations rely on RFC 1918 addresses internally and thus have some protection with NAT.

The growing numbers of IPv6 deployments threaten to expose millions of devices directly to the Internet as enormous blocks of publicly routable IPv6 addresses are assigned to internal devices.


Jack Daniel

About Jack Daniel

Jack Daniel, Strategist at vulnerability assessment specialists Tenable Network Security, is a well-known security expert, blogger and community advocate who co-hosts the Security Weekly podcast and co-founded the Security B-Sides conference series. As a network and security systems engineer with expertise in “practical information security”, enterprise security and integration of emerging technologies, Daniel is a self-described “InfoSec Curmudgeon” and “Reluctant CISSP”, who has also served as a longtime contributor to the National Information Security Group (NAISG). A winner of the Microsoft MVP for Enterprise Security, Daniel has worked previously at security appliance maker Astaro and as a real-world practitioner in the automotive industry. Jack remains a frequent speaker on technology, security, and compliance at conferences including Shmoocon, SOURCE Boston, DEFCON, RSA, and the many B-Sides events.

Advancing Firewall Evils to 10-Tuple




When I first started working with firewalls some 18-odd years ago, the revolution of “stateful inspection” was just starting to take hold. The explosion of Internet bandwidth (laughable now) to DS3-type speeds was driving everyone away from the proxy solutions they had in place to this awesome new security device.

All firewalling concepts were geared to the 5-tuple, situating the firewall firmly in the L4 space, but even then the market leaders defied that definition. Anyone who tried to pass active FTP without the proper CRLF formatting in the command channel was painfully aware of just how far up the stack the “L4 firewall” could go.

Proverbial "burning wall" image

Of course, back then you made a good living knowing how to turn those security features off (probably not selectively) so you could make the network work again. Now, we’re all trying to figure out how to program the network properly so we can exert control over the 10-tuple, which eliminates the need for stateful inspection, right?
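For reference, the classic 5-tuple that stateful inspection keys on is sketched below alongside one possible extended tuple. The extra fields are purely illustrative, since there is no single canonical “10-tuple” definition.

```python
# The classic 5-tuple versus a hypothetical extended tuple that adds
# identity and application context. The extra fields are illustrative;
# there is no single canonical "10-tuple" definition.
from typing import NamedTuple


class FiveTuple(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str


class ExtendedTuple(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    user: str          # authenticated identity
    application: str   # e.g. "ftp" or a named web service
    device: str        # endpoint type or posture
    location: str      # site, datacenter or cloud segment
    risk: str          # value/risk classification of the service


flow = ExtendedTuple("10.0.0.5", "203.0.113.7", 51514, 21, "tcp",
                     "alice", "ftp", "laptop", "dc-east", "low")
print(flow.application)   # -> "ftp"
```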

The answer to the question requires some thought regarding basic concepts. I start by wondering: “Why does the network exist? What’s its purpose?” For me, the answer is that the network provides nothing in and of itself; it exists to supply services to the users of those services. With that in mind, we can start by wondering just what it is the firewall does for us.

Some past thought patterns would hold that the firewall:

• Stops users from consuming unauthorized services (SSH, for example) – which seems like something the service should do, right? If my network can manage flows, why can’t my service manage who consumes those services?

• Prevents bad actors from exploiting misconfigurations and vulnerabilities on the network and overlying services – but isn’t the network intelligent enough to protect itself and the services that ride on top of it?


Brandy Peterson

About Brandy Peterson

As CTO of services and solutions provider FishNet Security, Brandy Peterson has established an impressive career building and managing customer support offerings, with a long history of proven IT security leadership and expertise. With more than 12 years at FishNet, Peterson and his staff provide IT support to both the company’s employees and over 5,000 customers across the United States, with a specific focus on ensuring that FishNet is well-positioned to offer industry-leading capabilities via cutting-edge systems and technology. Prior to his role as CTO, Peterson served as systems engineer and then director of technology at FishNet.