Network Operation: Developing a Smarter Security Framework




As any network operator can attest, the words “firewall” and “security appliance” carry multiple connotations, some of them flattering and others that are… not.

That being said, developing scalable, feature-driven security devices is a difficult task, especially when trying to provide the best performance at the most competitive price.


Over the past few years, the number of enterprises that have migrated to hybrid datacenter and cloud architectures has increased dramatically, exacerbating underlying issues around throughput, redundancy and administration.

As a result, today’s enterprise architectures are far more distributed than ever before – most often a conglomeration of multiple vendors, code versions and management methods.

Imagine being an operator responsible for multiple datacenter network security systems and having to integrate your security management methodology into a cloud environment.

This remains a daunting challenge, not only because of many organizations’ inability to find critical staff and the sheer difficulty of managing disparate systems centrally and seamlessly, but also because of how hard it is to have faith that everything will operate in the same manner after a code upgrade or the activation of a new feature.

Since the rest of the networking space has already adopted horizontal scaling for hardware and software, why aren’t we following the same methodology for security? Security appliances are not carrier-grade routers, nor should they be treated as such. Yet the sheer number of features that enterprises require from their security systems often comes at the expense of throughput, creating subsequent traffic flow issues across the network.

As a result, firewalls and other security appliances must evolve to operate as software on commodity hardware or as virtual machines, so they can both scale horizontally and deliver all the necessary features regardless of where they are deployed. A common, easily tunable API abstraction management layer will also be critical in reducing operational overhead for network engineers.
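To make that abstraction layer concrete, here is a minimal sketch, in Python, of what a vendor-neutral rule-push interface might look like. All of the class and function names below (Rule, SecurityDevice, VirtualFirewall, deploy_everywhere) are hypothetical illustrations, not any particular vendor’s API.

from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Rule:
    """A vendor-neutral security rule (illustrative fields only)."""
    name: str
    src: str        # source prefix, e.g. "10.0.0.0/8"
    dst: str        # destination prefix
    port: int
    action: str     # "allow" or "deny"


class SecurityDevice(ABC):
    """Abstract interface that each vendor-specific driver would implement."""

    @abstractmethod
    def push_rule(self, rule: Rule) -> None: ...

    @abstractmethod
    def healthy(self) -> bool: ...


class VirtualFirewall(SecurityDevice):
    """Example driver: a firewall running as software on commodity hardware or a VM."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def push_rule(self, rule: Rule) -> None:
        # In practice this would translate the neutral Rule into the device's
        # native configuration syntax or management API call.
        print(f"[{self.endpoint}] {rule.action} {rule.src} -> {rule.dst}:{rule.port}")

    def healthy(self) -> bool:
        return True


def deploy_everywhere(devices: list[SecurityDevice], rule: Rule) -> None:
    """Push one policy change to every device, regardless of vendor or location."""
    for device in devices:
        if device.healthy():
            device.push_rule(rule)


fleet = [VirtualFirewall("dc1-fw"), VirtualFirewall("cloud-fw")]
deploy_everywhere(fleet, Rule("block-telnet", "0.0.0.0/0", "10.0.0.0/8", 23, "deny"))

The point of the exercise is that the operator tunes one rule model and one API, while the drivers absorb the vendor differences.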

By adopting this mindset, security systems will provide a much higher level of accuracy for threat detection and mitigation, along with administration of rule sets, resiliency and throughput – all while reducing operational and capital expenditures. The rest of the network must communicate and share critical information, especially as we progress further into Software Defined Networking (SDN) and Network Functions Virtualization (NFV).

Yet, network security systems continue to operate as islands today.

To change this we must truly embrace the mindset that security is just another key service operating within the chain that is the network. Only then can we move forward in developing a more protective, unified, vendor-neutral and architecturally agnostic framework.

Join The Conversation


We encourage you to share your thoughts, and we look forward to reading your comments. We invite you to subscribe to our blog to keep up with the latest posts of our new series.

David Mitchell

About David Mitchell

David Mitchell is Founder & CEO of Singularity Networks and serves as advisor to Dtex Systems and Versa Networks. Mitchell is also a former advisor to network security specialists Arbor Networks and FireEye. Prior to launching Singularity in 2014, David served as Lead Network Security Engineer at Twitter and held similar roles at Netflix, Yahoo and Time Warner Telecom. He also remains active in the operational security community, including participation in efforts such as the Internet Engineering Task Force (IETF).

Future Considerations: Software Defined




If Software Defined Networking (SDN) becomes the open ubiquitous technology that I think it will, everything changes.

That sounds dramatic, but I believe that SDN will change many aspects of how we deploy and manage networks. It also creates a completely new paradigm for security enforcement and an opportunity to think differently.

I think it will be amazing for people, for the industry, and for everything we try to do in security. It will power an Internet of Things (IoT) and forever elevate the value of data anytime, anywhere. I see SDN as the next critical step that no one will ever know happened.


When is this amazing change supposed to happen, you ask? It’s already started and it will be ongoing for many years to come. It’s not something where you can just flick a switch and suddenly it’s all there and running; there’s still lots of work to do.

But we can flick the switch ahead of time when thinking about how to build an SDN strategy, and ultimately a secure one. To do this, you have to drop all current expectations of the technologies that you’re running today and think about what SDN is meant to change at all levels.

To get in the right state of mind for this exercise, consider a situation where you’ve been running a library for many years. It’s stacked full of books, magnificent collections for anyone to access and read via a book tracking system that you’ve spent millions on, essentially putting the Dewey Decimal System online.

Then tragedy strikes one night and the entire collection, along with the building, burns. The insurance money comes in and we are left with a real question: does it make any sense to rebuild a building full of books, knowing what we already know about technology? Is there still a place for this? Before, due to a long history of value, this option was assumed; but when presented with the chance, or in fact forced, to recreate the library, does the design and deployment of a building of books make any sense?

I ask this question because you have to go into SDN with just that frame of mind. Ask yourself if what you’re doing today makes any sense in this new design, then go a step further. Ask yourself what you need to do to empower SDN instead of looking at it from the perspective of how it might work based on how you do things today.

What It Takes

Let’s flick that switch now and consider how SDN is evolving the network by walking through an SDN-enabled infrastructure from network to application.

SDN extracts network intelligence directly from switches into a centralized controller. This controller contains all the objects in the environment, from switches to applications, and everything between. The controller can send commands like “put, get, forward, delete, etc.”, as well as take in data about the state of any forwarding tables (and that’s without getting into the technical details, which is another blog unto itself).
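As a loose illustration of that controller-to-switch relationship, here is a toy model in Python. The command verbs mirror the ones mentioned above, but the class and method names are invented for this sketch and do not correspond to any real controller API.

class Switch:
    """A switch whose forwarding table is owned by the controller."""

    def __init__(self, name: str):
        self.name = name
        self.forwarding_table: dict[str, str] = {}   # match -> output port

    def apply(self, command: str, match: str, port: str | None = None) -> None:
        if command == "put":
            self.forwarding_table[match] = port
        elif command == "delete":
            self.forwarding_table.pop(match, None)


class Controller:
    """Centralized view of every switch, application and flow in the environment."""

    def __init__(self):
        self.switches: dict[str, Switch] = {}

    def register(self, switch: Switch) -> None:
        self.switches[switch.name] = switch

    def forward(self, switch_name: str, match: str, port: str) -> None:
        # Push a "put" command down into the switch's forwarding table.
        self.switches[switch_name].apply("put", match, port)

    def get_state(self, switch_name: str) -> dict[str, str]:
        # Pull the switch's current forwarding table back up into the controller.
        return dict(self.switches[switch_name].forwarding_table)


ctrl = Controller()
edge = Switch("edge-1")
ctrl.register(edge)
ctrl.forward("edge-1", "app=healthband", "port-7")
print(ctrl.get_state("edge-1"))   # {'app=healthband': 'port-7'}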

Consider a network where you can make forwarding decisions based on far more than IP data. I’m talking about simply knowing where the connection needs to be and forwarding it across any infrastructure to any application, against any security controls that you may need. Maybe you rewrite the IP header as it moves across physical connections, but that’s not even necessary to consider when working with SDN as the process is abstracted away from us.

Think about what you could do with the power to forward packets based on a myriad of possible scenarios from network to application, and being able to track and protect that flow on demand. Running out of CPU and memory in one datacenter? Send the flow over to another. That one getting tapped out? Push it out into a cloud infrastructure.

New version of your application going online? No problem, as the next flow will be directed to the virtual machine running the new code. Problem with the new code? OK then, the next flow goes back to the previous version of the service, all on demand and orchestrated. I can’t wait to see the creative things that people do with this level of programmability and control.
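A back-of-the-envelope sketch of that orchestration logic might look like the following. The site names, CPU thresholds and health flag are all made up for illustration; a real controller would get this state from telemetry.

SITES = {
    "dc-east": {"cpu_free": 0.12, "app_version": "v2"},
    "dc-west": {"cpu_free": 0.55, "app_version": "v2"},
    "cloud":   {"cpu_free": 0.90, "app_version": "v1"},
}

NEW_VERSION_HEALTHY = True   # flipped by monitoring if the new code misbehaves


def place_next_flow() -> str:
    """Pick a destination for the next flow, on demand."""
    # Prefer sites running the new application version while it is healthy;
    # otherwise fall back to the previous version.
    wanted = "v2" if NEW_VERSION_HEALTHY else "v1"
    candidates = [name for name, s in SITES.items() if s["app_version"] == wanted]

    # Running out of CPU in one datacenter? Keep only sites with headroom,
    # and push the flow out to cloud capacity if everything else is tapped out.
    candidates = [n for n in candidates if SITES[n]["cpu_free"] > 0.20] or ["cloud"]
    return max(candidates, key=lambda n: SITES[n]["cpu_free"])


print(place_next_flow())   # -> "dc-west" with the sample numbers above

Every placement decision becomes just another query against the controller’s view of the world, which is what makes this level of programmability possible.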

The Security Perspective

How is security affected by all of this?

For starters, it’s simply abstracted to a service with policy eventually moving into orchestration of that service. Don’t get me wrong, security policy management remains relevant, but it moves from a dictated security policy to a monitored security policy, just not right away. And over time, traditional enterprise security policies will become less relevant. To show you what I mean, we can jump ahead to the concept of a monitored policy as part of this exercise.

Let’s say that an application request comes in the form of a network call to a Web service to return data for a custom application, perhaps a new wearable armband health application. The network then checks its table to see where to send the connection, tags it accordingly, and forwards it on. In turn, the controller knows an application request is on its way for this particular service, and most likely already has a server up and running, ready to service it.

Since the controller knows how many clients it can service per virtual machine, with defined CPU and memory, it keeps spinning up new virtual machines and redirecting traffic accordingly. Including security in this process becomes a simple task. There’s no need to deploy hardware and create choke points, as security simply becomes another application to the abstracted network.

For example, we can forward the data based on any decision, not just the network setup, and offload a copy for traffic validation; essentially run an on-demand security scan on the same flow and let the controller know if there’s a problem. Based on the orchestration decisions, the controller can have the traffic flow quarantined, blocked, redirected or just plain dropped; how and why will be tied to the value and risk of the service.
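Here is a minimal sketch of that offload-and-verdict loop in Python. The scan function, risk tiers and action mapping are assumptions for illustration; in practice the verdicts and responses would come from whatever security services the controller orchestrates.

import random

# Map the risk of the service to the response the controller takes when the
# offloaded scan flags a problem (illustrative mapping, not a standard).
ACTIONS_BY_RISK = {
    "high":   "quarantine",
    "medium": "block",
    "low":    "drop",
}


def scan_copy(flow_id: str) -> bool:
    """Stand-in for the on-demand traffic-validation service; True means clean."""
    return random.random() > 0.05


def handle_flow(flow_id: str, service_risk: str) -> str:
    """Forward clean flows; otherwise act according to the value and risk of the service."""
    if scan_copy(flow_id):
        return "forward"
    return ACTIONS_BY_RISK.get(service_risk, "redirect")


print(handle_flow("flow-42", "high"))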

This is the point where we move from security policy management to security policy monitoring. As applications are defined and brought online, information will be collected on what data is handled by which users and corresponding threat scanning can scale up or down accordingly. It will be this on-demand delivery of security services that will enable rapid scaling of new applications.

While I’m excited about all these possibilities, I’m fearful of the potential nose dive that could occur if vendors try to create some form of lock-in. SDN as a technology can’t be stopped by this and will emerge no matter what; it’s just a matter of how long it takes. Being realistic, it’s just going to take a few generations of equipment to get there.

However, if we truly enable SDN from networks, along with security, and into the application, many of our current challenges go away. Not to say we won’t have new issues to consider, but I’ll save that discussion for another time.


Kellman Meghu

About Kellman Meghu

Kellman Meghu is Head of Security Engineering (Canada and Central US) for Check Point Software Technologies Inc. His background includes almost 20 years of experience deploying application protection and network-based security. Prior to joining Check Point, Mr. Meghu held various network, VoIP and security engineering roles with European telecommunications giant Alcatel and Electronic Data Systems (EDS), and worked as a private consultant.

Advancing Firewall Evils to 10-Tuple




When I first started working with firewalls some 18-odd years ago, the revolution of “stateful inspection” was just starting to take hold. The explosion of Internet bandwidth (laughable now) to DS3-type speeds was driving everyone away from the proxy solutions they had in place to this awesome new security device.

All firewalling concepts were geared to the 5-tuple, situating the firewall firmly in the L4 space, but even then the market leaders defied that definition. Anyone who tried to pass active FTP without proper CRLF formatting in the command channel was painfully aware of just how far up the stack the “L4 firewall” could go.
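As a rough sketch of the kind of command-channel inspection that implies, consider how a firewall has to read an active FTP PORT command before it can open the data connection. The code below is illustrative only; real implementations handle many more cases.

import re

# Active FTP: the client announces its data-channel address on the control
# channel with "PORT h1,h2,h3,h4,p1,p2\r\n". The firewall must parse this,
# insist on proper CRLF termination, and open a pinhole for the data flow.
PORT_RE = re.compile(rb"^PORT (\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\r\n$")


def pinhole_from_port_command(line: bytes) -> tuple[str, int] | None:
    """Return (client_ip, client_port) to permit, or None if the command is malformed."""
    match = PORT_RE.match(line)
    if match is None:
        return None   # improperly formatted command: this is where active FTP broke
    h1, h2, h3, h4, p1, p2 = (int(g) for g in match.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2


print(pinhole_from_port_command(b"PORT 192,168,1,10,7,138\r\n"))   # ('192.168.1.10', 1930)
print(pinhole_from_port_command(b"PORT 192,168,1,10,7,138\n"))     # None (missing CR)

That parsing happens well above Layer 4, which is the point: the “L4 firewall” was never purely L4.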


Of course, back then you made a good living knowing how to turn those security features off (probably not selectively) so you could make the network work again. Now, we’re all trying to figure out how to program the network properly so we can exert control over the 10-tuple, which eliminates the need for stateful inspection, right?
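Before tackling that question, it helps to be concrete about the two match models. The post doesn’t enumerate the 10-tuple’s fields, so the extra fields in the sketch below (ingress port, VLAN, Ethernet addresses, an application tag) are assumptions for illustration rather than a standard definition.

from dataclasses import dataclass


@dataclass(frozen=True)
class FiveTuple:
    """The classic L4 firewall match."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str


@dataclass(frozen=True)
class TenTuple(FiveTuple):
    """One possible 10-tuple: the 5-tuple plus richer network and application context."""
    ingress_port: str
    vlan_id: int
    eth_src: str
    eth_dst: str
    app_tag: str     # e.g. an application or user-identity label


flow = TenTuple("10.1.1.5", "172.16.0.9", 51514, 443, "tcp",
                "ge-0/0/1", 300, "aa:bb:cc:00:11:22", "aa:bb:cc:33:44:55", "payments")
print(flow.app_tag)   # forwarding and policy decisions can now key on this, not just IPs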

The answer to the question requires some thought regarding basic concepts. I start with wondering: “Why does the network exist? What’s its purpose?” For me, the answer is that the network provides nothing in and of itself; it exists to supply services to the users of those services. With that in mind, we can start by wondering just what it is the firewall does for us.

Some past thought patterns would be that the firewall:

• Stops users from consuming unauthorized services (SSH, for example) – which seems like something the service should do, right? If my network can manage flows, why can’t my service manage who consumes those services?

• Prevents bad actors from exploiting misconfigurations and vulnerabilities on the network and overlying services – but isn’t the network intelligent enough to protect itself and the services that ride on top of it?

Continue Reading →

Brandy Peterson

About Brandy Peterson

As CTO of services and solutions provider FishNet Security, Brandy Peterson has established an impressive career building and managing customer support offerings, with a long history of proven IT security leadership and expertise. With more than 12 years at FishNet, Peterson and his staff provide IT support to both the company’s employees and over 5,000 customers across the United States, with a specific focus on ensuring that FishNet is well-positioned to offer industry-leading capabilities via cutting-edge systems and technology. Prior to his role as CTO, Peterson served as systems engineer and then director of technology at FishNet.

Natural Selection: The Future of the Firewall




When Jody Brazil and the folks at FireMon asked me if I’d write a post for this “Future of the Firewall” series, my first thought was, “if I had a nickel for every time someone told me the firewall was dead, I’d be rich.”

Yes, the good old firewall, the security technology everyone loves to hate, has been on supposed life support for years. And yet it’s a $9 billion market, according to Gartner. We should all be that sick.

To be fair, today’s next generation devices bear little resemblance to those old Check Point boxes you may remember. It’s sort of like comparing a Model T Ford to a Tesla.

However, just as both cars can get you from A to B, today’s firewalls are doing the same things those old Check Point or Cisco PIX boxes did. While speed, bandwidth, scalability and capabilities have increased, firewalls do the same thing now that they did then: controlling ingress and egress.

Going into the future, firewalls will still perform this task.

I don’t want to leave the impression that nothing has or will change, though. Firewalls have evolved and collectively these changes have drastically shifted the model. For me, the biggest change is where the firewall lives; it’s no longer merely the drawbridge over the perimeter moat providing entrance to the castle.

Shrinking Dinosaurs

A better analogy for how firewalls have changed might be found in comparing dinosaurs to birds. Just as the dinosaurs evolved into birds and took flight, firewalls have transformed. Initially they flew inside. One significant innovation was the use of firewalls deployed inside the network to isolate segments, with highly sensitive data kept behind these internal systems.

Other firewalls evolved into big honking boxes sitting at the core of the network. Instead of perimeter devices, these firewalls performed ingress and egress monitoring/control at a critical choke point for all network traffic.

And just as some firewalls flew inside, other firewalls flew away altogether. Some flew to the cloud, where the servers were going, to protect the web servers and applications that serve as the interface for computing interactions.

Continue Reading →

Industry News – Advancing Network Threat Intelligence




When FireMon re-positioned itself around the concept of Proactive Security Intelligence at the beginning of 2014, the effort was undertaken with the notion of highlighting the critical role that data produced by our solutions plays in managing enterprise security and IT risk.

Sure, if you want to start at the most foundational element of the processes we support, as many of our customers do, it can be stated as simply as firewall management – getting a clear understanding of what network security device infrastructure is doing, then improving the performance and efficiency of those defenses, continuously.


However, the truth is, “firewall management” is far too narrow a manner of communicating the overall value of what the FireMon Security Manager Platform and its supporting modules offer in terms of strategic information, thus the new messaging.

With all the intelligence that we produce regarding policy workflow, compliance validation and risk management, along with enablement of related process automation, we felt it was far more appropriate, if not completely defensible, to adopt this broader PSI mantra.

Intelligence, of course, has evolved into a very broad and encompassing industry buzzword, popular among security vendors of all breeds who feel that they provide some form of critical data to inform strategic decision making – which admittedly could be almost any company on the landscape today.

Continue Reading →

Matt Hines

About Matt Hines

Matt Hines leads product marketing efforts at FireMon. Prior to joining FireMon, Hines held similar roles at TaaSERA, RedSeal Networks and Core Security Technologies, and worked for over a decade as a journalist covering the IT security space for publishers including IDG, Ziff-Davis, CNET and Dow Jones & Co.

Black Hat 2014: RSA in the Desert?




I’ve been attending the Black Hat Security Conference in Las Vegas for almost a solid decade now, and if there’s one thing that’s for sure, it’s that the conference continues to evolve.

Granted, when I first started attending Black Hat those many years ago, it was not as a marketing rep for a security software vendor, but as a reporter attempting to get my head around the emerging threat/exploit landscape.


However, even if my time is no longer spent attending sessions, and trying (with varying degrees of success) to understand what is being presented, a walk across this year’s show floor clearly evidences the continued shift towards a more business-centric audience.

This is nothing new, of course, as hardcore Black Hat attendees have been decrying the show’s evolution into more of an “RSA in the desert” for years. However, it’s clear that with each passing summer this change becomes ever more the reality.

When I was working for pen testing specialists Core Security in 2008, it was clear that ethical hackers, primarily researchers, still made up a huge swath of the Black Hat audience; that no longer appears to be the case.

Certainly it has a lot to do with spending more time in the vendor exhibition space, but with each year I see more corporations and government agencies listed on attendees’ badges, and fewer humorous attempts to dodge identification (though we do have several “ninjas” and at least one “director of rainbows and unicorns” listed among our 2014 badge scans).

While discussing this phenomenon with longtime industry guru Alan Shimel (currently of the CISO Group and Security Bloggers Network), we debated the potential upsides and downsides.

First off, neither of us would debate that there’s still a wealth of extremely valuable research on the Black Hat schedule, even though I can’t claim to have attended many of those sessions in recent years.

Another key factor to consider is that the sister DEF CON and parallel B-Sides Las Vegas shows cater directly and almost exclusively to ethical hackers and focus almost solely on research, allowing Black Hat to grow more… corporate.

You also have the phenomenon of people who started out as Black Hat researchers who are now focused more on the business side of things, having built vital companies out of the expertise they used to share as conference presenters (the guys from White Hat Security are a fitting and high-profile example).

As noted above, one of the other significant changes in Black Hat attendance is the ever-increasing number of government attendees. In years past there may have been a lot of Red Team/Blue Team types – and likely still are – but today, there’s an overwhelming number of state and federal security officials in attendance – with their names and titles displayed openly on their badges (another notable shift).

My impression is that many of the people who first came to Black Hat – and now may spend more time at Def Con or B-Sides – may disparage the show’s change in interests, arguing that the event is now too focused on the business side.

However, for companies like FireMon this shift has obviously made the event even more valuable, providing us with another fantastic opportunity to connect with existing customers and new prospects to tell them more about what our solutions can do.

Is the change good? Is it bad? That’s for each individual to decide on their own, but as Alan and I eventually agreed, it’s really just a natural evolution as hacking and ethical research continue to mature and become an even bigger element of enterprise security.

No matter how you slice it, Black Hat continues to serve as an ideal venue for numerous elements of the security community to connect. No matter what changes come, it’s always a pleasure to be there.

Matt Hines
