Microsegmentation: Great idea, but how?

In the last few years, microsegmentation has become an increasingly prominent promise of network security for organizations in search of Zero Trust. As a framework, Zero Trust enforces secure connections down to the asset or application level, tightening security and providing the means to shut down east-west traffic in the event of a compromise.

What exactly is microsegmentation?

In simplest terms, microsegmentation works like this: a typical network consists of thousands of different computers. We’ve long been able to segment them, essentially drawing virtual perimeters around groups of computers to better define security parameters and protocols. Microsegmentation simply takes that process down to the level of the single computer. A perimeter is drawn around each computing resource or application, meaning that thousands of different computers in a network now each have their very own personal fortress protecting them.
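To make that concrete, here is a minimal, purely illustrative sketch in Python. The host names and the policy structure are hypothetical (not any vendor’s actual API); the point is the contrast between one rule covering a whole segment and a default-deny allow-list scoped to each individual workload:

```python
# Illustrative sketch only: hypothetical hosts and a made-up policy structure,
# not any vendor's real API. It contrasts one rule per segment with one
# policy per workload, which is the core idea of microsegmentation.

# Traditional segmentation: one rule covers every host in the subnet.
segment_policy = {
    "10.0.5.0/24": {"allow_inbound": ["tcp/443", "tcp/1433"]},
}

# Microsegmentation: each workload gets its own allow-list, scoped to the
# specific peers and ports it actually needs.
workload_policies = {
    "web-01": {"allow_inbound_from": {"load-balancer": ["tcp/443"]}},
    "app-01": {"allow_inbound_from": {"web-01": ["tcp/8080"]}},
    "db-01":  {"allow_inbound_from": {"app-01": ["tcp/1433"]}},
}

def is_allowed(src: str, dst: str, port: str) -> bool:
    """Default-deny check: traffic passes only if the destination's own
    policy explicitly allows this source and port."""
    policy = workload_policies.get(dst, {})
    return port in policy.get("allow_inbound_from", {}).get(src, [])

print(is_allowed("app-01", "db-01", "tcp/1433"))  # True: expected path
print(is_allowed("web-01", "db-01", "tcp/1433"))  # False: lateral move blocked
```

The second model is what shuts down lateral, east-west movement: even if web-01 is compromised, it has no path to db-01 unless a policy explicitly grants one.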

This is undoubtedly a higher level of security and highly compatible with a Zero Trust framework. Previously, network security was perimeter-centric, hardware-centric and largely inflexible. Despite the widespread assumption that a larger suite of security products would, in fact, create more security, data breaches kept happening. It was time for a newer, more flexible and adaptive model, and that is the gap microsegmentation was born to fill. (This paper from SIGCOMM in 2011 is thought to be one of the earlier references to the idea, with a Gartner research note in 2017 taking the idea more into the mainstream.)

There were other concerns with the traditional approach to networking, one of the foremost being how prescriptive it is. Computers within the network needed to be told exactly what to do, such as “block port X” or “enable 802.1q on port P.” Most traditional network equipment makers offer their own user interface, most often a command-line interface (CLI). That means if your internal team works with multiple vendors, they need to learn the CLI syntax for each vendor and apply it correctly every time they prescribe a change. It gets tedious, and mistakes happen everywhere.
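For a sense of why that gets error-prone, here is a small hypothetical illustration (the vendor names and command syntax below are invented, not real products): the same single intent has to be re-expressed in a different dialect for every vendor in the environment.

```python
# Purely hypothetical vendor names and command syntax, invented to illustrate
# why keeping the same rule in sync across multiple CLIs gets tedious fast.
RULE = {"action": "deny", "port": 3389, "proto": "tcp"}

VENDOR_TEMPLATES = {
    "VendorA": "access-list 101 {action} {proto} any any eq {port}",
    "VendorB": "set firewall filter edge term t1 from {proto}-port {port} then {action}",
    "VendorC": "rule add proto={proto} dport={port} verdict={action}",
}

for vendor, template in VENDOR_TEMPLATES.items():
    # One logical intent, three different syntaxes to learn and maintain.
    print(f"{vendor}: {template.format(**RULE)}")
```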

And did we mention that breaches still keep happening?

But there’s a problem with microsegmentation

It may have jumped out at you already.

If your network was organized into 500 firewalled perimeters, each covering 100 computers, managing them undoubtedly came with some challenges.

But if you’re looking for a new strategy and embrace microsegmentation, those 500 firewalls become 50,000 once you get down to the individual computer level (500 perimeters × 100 computers each = 50,000 enforcement points).

If you had a lot of concerns at 500, it stands to reason you’d have significantly more at 50,000.

So if a best case scenario is greatly improved security, would the worst case scenario be 50,000 reasons to panic?

No one wants to add more stress and workload to their teams, which can be a major sticking point for executives and decision-makers first hearing about microsegmentation. One of your goals in such a role is to protect your people from overwork so that they can focus on KPIs. Why add thousands of new firewalls?

Plus, in all likelihood you’re going to need an improved analytics program to recognize traffic patterns and key relationships between computers in the network. Those patterns will form the basis of rules and protocols going forward, and doing that work manually across many thousands of firewalls is unsustainable.
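As a rough sketch of what that analytics step amounts to (the flow records and workload names here are hypothetical), the job is to aggregate observed traffic into candidate per-workload allow-rules, which is exactly the kind of work no team can realistically do by hand at this scale:

```python
from collections import defaultdict

# Hypothetical flow records (source, destination, port) as an analytics
# pipeline might collect them from flow logs or host agents.
flows = [
    ("web-01", "app-01", "tcp/8080"),
    ("web-02", "app-01", "tcp/8080"),
    ("app-01", "db-01", "tcp/1433"),
    ("app-01", "db-01", "tcp/1433"),  # repeat traffic collapses into one rule
]

# Aggregate observed traffic into candidate per-workload allow-lists.
candidate_rules = defaultdict(lambda: defaultdict(set))
for src, dst, port in flows:
    candidate_rules[dst][src].add(port)

for workload, peers in candidate_rules.items():
    for peer, ports in peers.items():
        print(f"{workload}: allow {', '.join(sorted(ports))} from {peer}")
```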

The equation you’re weighing here, then, is:

Improved security vs. increased workload and time away from priorities

How do you make that call?

How can this be overcome?

It actually can be. The key is a focus on “intent,” putting core business needs above all. Intent-based networking also helps with the prescriptive problem described above. We’ll go more in-depth in the next post.