A Zero Trust Approach to Decoupling Intent from Implementation

On-Demand

Video Transcription

Lindsay Brechler:
Hi, everyone. Welcome today. We’re going to give everyone just a couple of minutes to join, and then we’ll get started here just shortly. (silence)

Lindsay Brechler:
Hi, everyone. Welcome. We’ll give everyone just another minute or so to get settled, and we’ll get started here. (silence)

Lindsay Brechler:
Hi, everyone. Welcome to today’s webinar. My name is Lindsay Brechler and I’ll be your moderator. I’m a product manager here at FireMon, and I’m excited to be hosting this session today. We have two great speakers, Matt Dean from FireMon and Dr. Chase Cunningham, principal analyst at Forrester. Dr. Cunningham’s research at Forrester guides client initiatives related to SOC planning and optimization, counter threat operations, encryption, network security, and the reason you’re all here today, zero trust concepts and implementation. Dr. Cunningham is a retired US Navy chief with experience in cyber forensics and cyber analytic operations and operations experience from work centers within the NSA, CIA, FBI, and other government agencies.

Lindsay Brechler:
Matt Dean, FireMon vice president and general manager of Global Policy Controller. He’s also held a number of previous roles at FireMon, including VP of product strategy and chief operating officer. He’s a frequent speaker on industry best practices, firewall management, and compliance. Prior to joining FireMon, Matt served as a communications officer in the US Air Force with an emphasis on operational network management and is also a graduate of the US Air Force Academy.

Lindsay Brechler:
Today’s webinar is a continuation of discussions between Matt and Chase, which began earlier this year when FireMon was named the only network security policy management vendor in Forrester’s Zero Trust eXtended ecosystem. Before I hand the mic over to Chase, I have a few housekeeping items about the presentation and the BrightTALK webinar platform. First, today’s webinar will be available on demand after the live session, through the same link you’re using now.

Lindsay Brechler:
We’ve also added some attachments which are available through the attachments tab at the bottom of your screen. You can find today’s deck and our zero trust e-book there. We’ll leave time for questions at the end of the session. So if you have one for our speakers, please feel free to submit it through the ask a question tab at the bottom of your player. If we don’t get to your question during today’s webinar, we’ll be sure to follow up afterwards.

Lindsay Brechler:
Last, we’d like to encourage you to share today’s webinar with your social networks. You can retweet any of the live tweets you see from FireMon during the presentation, or you can use the social sharing icons at the top right to post directly to Twitter, Facebook, or LinkedIn. Thank you again for attending today’s webinar, The Future of Network Security: A Zero Trust Approach to Decoupling Intent from Implementation. Without any further ado, I’ll hand it off to Forrester analyst Dr. Chase Cunningham.

Dr. Chase Cunningham:
Great, thanks for having me today. This is a really interesting topic. For any of my fellow military people that are out there listening, don’t hold it against me that I’m doing a webinar with an Air Force guy. Navy folks, we got to let the Air Force play in our area every once in a while. So I won’t hold it against Matt. But really, what we’re going to talk about is sort of the ideas and pieces around some of the zero trust concepts that have been rolling out there and get a little bit more of a conceptual understanding of why the capability of being able to really segment control, isolate, and do things at speed and at scale is so necessary for organizations that are trying to figure out how to do zero trust.

Dr. Chase Cunningham:
I mean, everyone should pretty much be aware that it’s just dang near impossible to do good management and control without automation and orchestration playing a piece in that strategy. So it’s absolutely critical that everyone starts to understand the sort of why you’re doing it and how you’re doing it and be able to do it very, very fast across the entirety of the infrastructure. So I like to start speeches and webinars out with things that aren’t exactly directly related to the topic area just because I think it adds a little bit of a higher level conversation point.

Dr. Chase Cunningham:
I think about networks, and I think about infrastructure, almost in the context of where I came from, which, being in the Navy, was the maritime domain. When I think about functional segmentation, which is what you’re trying to get to in zero trust and in any sort of firewall management or control, really you’re trying to get away from the Titanic model. What I mean by that is the Titanic was built as the unsinkable ship, and they had a pretty good layer of segmentation down below the waterline. They were prepared for if something went wrong where they thought a flood might occur or where water might come in. But they stopped where they thought the water would stop, essentially. Nothing wrong with that at the time. But the problem was they didn’t actually segment around the entirety of where the threat, the avenue of a bad day, could start, which was essentially anywhere on that ship.

Dr. Chase Cunningham:
So once the water came in, and it started to flood through the hole that the iceberg made, it rose really, really quickly and spilled right over the top of all of their segmentation, and everyone knows the tragedy that occurred with Titanic. They didn’t do a very good job of having functional segmentation and isolation, so they lost that ship, and it cost people their lives.

Dr. Chase Cunningham:
Contrast that with how we do things in the Navy. This is a picture of the USS Stark. I can tell you from personal experience that when you do segmentation, which we call watertight integrity, or during general quarters, setting condition Zebra, it means that you are locking down everything on that ship: every drain, every outlet, every door, everything that you could possibly maintain control of. You’re doing it so that if something does go wrong, because it inevitably will, you can survive, and the ship will stay afloat.

Dr. Chase Cunningham:
If you think about how critical segmentation is for survival, in a network or on a US Navy vessel, the reality of it is if you have good segmentation, you will survive. If you look at the USS Cole, look at the Stark here, the Samuel B. Roberts, there are a few other ships that have been through these major crises, and because things were isolated and segmented and controlled during those events, they were actually able to keep those ships in service. I believe the Stark was able to go back into service for another 21 years after they recovered from this.

Dr. Chase Cunningham:
The other thing that you kind of have to remember when you’re doing this type of segmentation for the purposes of survival is you’ve got to be able to do it at speed and at scale. When things go wrong, you’ve got to know why you’re doing it so that you can survive. You got to know how you’re doing it. You got to lock things down and you’ve got to be able to close doors, and you got to know where you’re going to do it and what the importance of it is.

Dr. Chase Cunningham:
So it’s a little bit different way of looking at the problem set. Everybody spends all of our time focusing on networks and infrastructure and virtual things and firewalls and so on, and that’s super important. But the same things actually apply when you look at survival in the context of shipboard life in the maritime domain.

Dr. Chase Cunningham:
So now we’re getting into zero trust and moving into a little bit more of why segmentation, why control, why automation, why understanding what you’re doing is so important. If you focus on the newer model of zero trust, you can see that the automation and orchestration piece runs through the entirety of the model. It touches the workloads. It touches the network. It touches devices. It touches the people piece, and it surrounds the data.

Dr. Chase Cunningham:
So the point there being, zero trust has evolved a bit from the old days of focusing solely on the network and trusting nothing on the network. That’s still very critical. But where we are now is that this focus on being able to automate and orchestrate and do things at speed and at scale across the entirety of the model, across the entirety of the framework, is as critical as any other piece of the framework itself. To do zero trust, organizations nowadays are diving in on this particular framework, and they’re saying, “Okay. We focus on the outcome. We design from the inside out. We go from micro to macro.”

Dr. Chase Cunningham:
We’ve done a study recently that actually validated that the perimeter-based model just isn’t seen as effective anymore, and it’s not just Forrester saying it. We had hundreds of C-level executives and security leaders come back and say, “Yes, the perimeter doesn’t exist anymore. You’ve got to focus on the internals and work your way outbound.” You’ve got to start with the assets and data that need protection, because the reality of what you’re trying to do in the security space, in the context of zero trust, means that you have to understand what you’re doing and why you’re doing it, and the reason for all of this security stuff that we do is that you’re protecting data. No one breaks into a bank to say they were in a building. They break into a bank to take money. Hackers and bad guys don’t break into networks just to say that they’re in there. They’re in there to do something, and usually that means they’re taking data. So you’ve got to start with the assets and data that need protection.

Dr. Chase Cunningham:
You’ve got to determine who and what needs access and why. You’ve got to have some contextual understanding and insight into who’s doing things in your network and why they’re trying to do it, and you’ve got to be able to do that across the entirety of the infrastructure, as seamlessly as possible. If you can’t do that, you’re missing a very, very key piece of what you need to do this right. You’ve also got to enforce least privilege, you’ve got to be able to isolate and lock down anything that looks like it could be a threat, and you’ve got to practice that methodology of never trust, always verify.

Dr. Chase Cunningham:
Finally, you’ve got to be able to inspect and log all of the traffic. The interesting thing there is if you actually segment things correctly and you have the command and control and good automation orchestration across that infrastructure, you’ll get better insights into what’s going on because you’re not looking at all of the data all the time, across everything, as it flows nonstop. You’re actually able to work off of segmentation and know what’s going on so you can be a little bit more focused in your analytics, which makes your automation better, which makes your security better, which makes the outcomes better.

Dr. Chase Cunningham:
So all of this plays off of the other pieces of the framework, and the tenets themselves haven’t necessarily changed that much. They’ve just been modified and made a little bit broader.

Dr. Chase Cunningham:
So just some information from research. When you actually start looking at the firewall side of this equation and going back to that model, this is around the network and automation and orchestration piece: lots of organizations have more than one type of firewall from more than one vendor. I know from my days doing network design and architecture and things like that, you’re lucky if you don’t have more than two or three. Usually, you have three or four or five, and now that we have virtual firewalls and the capabilities there, it just becomes really, really big, really, really fast.

Dr. Chase Cunningham:
Most enterprises that have that type of issue wind up using a third-party firewall management tool, and they get a lot of benefit and value out of it. The point of that note is that the native firewall configuration capabilities offered by the vendors have typically been doing the same thing for a long time, and it’s hard to do automation and orchestration at scale with regular firewall controls alone.

Dr. Chase Cunningham:
There are networks where the firewalls have been up and running for 20-plus years in some instances, and people are afraid to optimize and use those firewalls better because things will fall over. The next-generation firewall is a great capability. But it’s got to be something that’s used optimally. It can’t just be: we have a next-generation firewall, we made the rules on it, yeah, we think we got it right. Odds are, if you actually run through and do good analysis and understand what’s going on with those firewall configurations, you’ll see really quickly that they’re not running optimally, and they’re not doing what you’re paying for. So you’re actually spending budget on a next-generation firewall that you probably aren’t getting the value out of, because it’s not optimized and because it’s not doing full orchestration across the infrastructure.

Dr. Chase Cunningham:
So just a little bit of mathematics here, and maybe I’m upping the nerd factor a bit too high, but what can I say? I like mathematics. Firewall math. If you look at the average number of firewall vendors for a small network, it’s roughly three. If you think about what we’re doing in the space right now as far as applications, which are kind of running every business everywhere all the time, most apps require about three tiers to operate. Each tier of those apps requires about 15 rules, and each organization of small size has about 250 applications running at any given time.

Dr. Chase Cunningham:
So if you do that math, and this is not perfect math, this is just a little bit of analysis and some shotgunning numbers around, for a small organization, you’re looking at roughly 35,000 firewall rules. If you think about who you have running this network for you, in most organizations it’s maybe one, two, or three people. How hard is it for those people to do optimal network configuration across all that infrastructure, legacy, hybrid, et cetera, et cetera, and do it right when you have 35,000 rules to figure out, and some of those rules may have been in there forever? It just gets really, really big, really, really fast. You can’t do this without optimizing it and understanding why you’re doing it, so that you know how to do it better and apply technology to solve the problem.
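
As a rough sketch of that firewall math, using only the ballpark figures from the talk:

```python
# Rough sketch of the "firewall math" above; every input is the speaker's
# ballpark figure, not measured data.
vendors = 3          # average firewall vendors in a small network
tiers_per_app = 3    # tiers most applications need to operate
rules_per_tier = 15  # rules each tier typically requires
apps = 250           # applications a small organization runs

total_rules = vendors * tiers_per_app * rules_per_tier * apps
print(total_rules)   # 33750 -- roughly the 35,000 rules cited above
```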

Dr. Chase Cunningham:
Just some other calculations around this, because I think that there’s veracity in mathematics, and it’s easier to see things when you really look at the data around it. This is from some other folks that did the research, and I’m just really interested in what they did and how they compiled it for these purposes. If you look at the number of applications in a data center, the average number of ACLs or firewall rules per application, the share of those rules being updated, and the error rate your team typically has, just being gentle there, let’s say most people will have about five errors per every X number of configurations, you can multiply all that out, run the formula, and actually figure out mathematically the likelihood of failure.

Dr. Chase Cunningham:
So if you had a small company that’s got 500 apps, roughly seven rules per app in this instance, and 20% of those rules are being updated with an error rate of 2% to 5%, the total likely exposure area of configuration would be about 72 errors. That’s pretty small. 72 is probably manageable. But when you start thinking about it as it gets bigger and grows, and you’re doing this for more firewalls across more infrastructure, going from hybrid to old to new to legacy to whatever, the numbers get really, really big, really, really fast.
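
As a sketch of that error estimate, here is one plausible reading of the formula; the talk doesn’t spell out the exact arithmetic behind the ~72 figure, so treat these steps as illustrative:

```python
# One plausible reading of the error-exposure estimate above; the talk does
# not spell out the exact formula, so these steps are illustrative.
apps = 500
rules_per_app = 7
update_share = 0.20             # 20% of rules being updated
error_low, error_high = 0.02, 0.05

total_rules = apps * rules_per_app       # 3500 rules overall
changed = total_rules * update_share     # 700 rules being updated

print(changed * error_low, changed * error_high)  # 14.0 to 35.0 errors
# Applying the low error rate across the whole rule base instead lands
# near the cited figure: 3500 * 0.02 = 70, roughly the ~72 above.
```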

Dr. Chase Cunningham:
I know that I’ve seen organizations that are running firewall sort of analysis and optimization programs, where I’ve seen hundreds of thousands of rules. So if you’re one individual, and you think about this sort of error rate calculation, hundreds of thousands of rules, and you’re running 2% to 5% of errors for hundreds of thousands, the numbers get really, really big, really, really fast.

Dr. Chase Cunningham:
So getting away from all the mathematics and the Navy stuff and the zero trust framing, what does it mean? The “what it means” section, as we call it at Forrester, is this: the truth of the matter is you have to understand that in the sort of environment we live in today, with the way that we do business, with the type of applications we run, with the proliferation of cloud, with the growth of virtualization, and with the speed at which simply everything moves, you can’t keep up. Your organization can’t keep up doing it manually. You’re not able to do this at speed and scale when you’re trying to dig ditches with a shovel.

Dr. Chase Cunningham:
You’ve got to be able to do this with a jackhammer and move as fast as possible across all of that. You’ve got to get it right. Manual approaches will fail. I mean, even if you totally disregard the rough calculations and the mathematics that give you the actual numbers, if you’ve ever tried to manually run configurations for these types of assets across those types of infrastructures, you’ll see really, really quickly that it is impossible to do this manually. It just doesn’t work at all.

Dr. Chase Cunningham:
The math is not in your favor. So it’s not one of those things where you can say, “Oh, I think I can get this right if I just keep trying and keep banging my head against the wall.” No. I mean, the numbers are there. There’s research out there for lots of different organizations that back this up that say that statistically, numerically, you can’t do this type of configuration at that level and get it right.

Dr. Chase Cunningham:
Compliance is not going to fix it. There are requirements in different compliance initiatives that are out there that say you have to have things configured a certain way. Great. You have to do that for business purposes. You have to do that for data protection and all that stuff makes sense if there is a necessity. But don’t rely on compliance being the best way to do things. Don’t rely on compliance being the guiding principle of how you build your organization.

Dr. Chase Cunningham:
I was a red-teamer for quite a long time, and I can tell you that compliance does not save you from compromise. Finally, you’ve got to change your strategy, change your way of thinking to start turning the tide in your favor. I deal with a lot of organizations all the time, and a common sort of complaint that you get is that we are still chasing our tails trying to solve this issue.

Dr. Chase Cunningham:
Well, most of the time, if they would back up and say, “Okay, strategically, why are we doing this? What does it mean? Can we move past just doing things and understand the intent behind it?”, they would get a much better result. That’s where zero trust has seemed to take off, with people getting behind the strategic side of this initiative and saying there are frameworks out there, this is sort of a movement, it makes sense, and we can align to the tenets of zero trust. Then we’ll start working strategically to move everything in lockstep with that particular strategy. All of a sudden, when everyone’s marching to the same beat, towards the same objective, speaking the same lingo, things get better.

Dr. Chase Cunningham:
So just a couple of other pieces here around why zero trust seems to be so helpful. These are actually not new. The funny thing is, we update this research pretty regularly, and when you look at the benefits of zero trust, they haven’t really changed that much in quite a while: improving network visibility, breach detection and vulnerability management, stopping malware propagation, reducing CapEx and OpEx, reducing the scope of compliance initiatives, eliminating inter-silo finger-pointing, doing better with data awareness and insight, stopping exfiltration of data by malicious actors, and enabling digital business transformation.

Dr. Chase Cunningham:
The way that you do this may have changed a little bit. The speed with which you try to approach zero trust may have changed a bit, and the understanding of the value that you get out of the zero trust approach may have gotten a little clearer in the recent past. But the benefits speak for themselves. There’s a reason there’s an industry shift around zero trust and organizations are starting to jump in on it: it makes sense, the benefits are clear to understand, and the fact that these benefits haven’t changed in quite a while means they are solid and you actually are able to get them. It’s not something that’s shifting with the sands.

Dr. Chase Cunningham:
Just a couple of pieces here on the phases of zero trust segmentation and understanding how you do this and why you do this: classification, analysis, design, implementation, and then monitoring. In classification, you’re making sure you know what the data is, where it lives, what it does, and the value of it. That’s what you’re going to build your security controls around, and that’s where you’re going to try to automate and orchestrate so that you can do this better.

Dr. Chase Cunningham:
Analysis. You need to know what the traffic is, where it’s going and the value of that information and be able to get insight from it. You got to design your system and your infrastructure around the data so that you’re actually putting controls in that matter, and then you’re going to implement it and get things rolling out. Usually, you’re going to have to have some sort of capability that’s offered to you by a vendor with really significant technology that does speed and scale, because nowadays, you have to do things across big assets, big infrastructure.

Dr. Chase Cunningham:
Finally, you’ve got to be able to monitor. If you’re doing this right, you’re going to be able to monitor things better. You’re going to see things better, and you’re going to be able to know what’s going on. All of this should give you insight and allow you to act quickly so that you can actually do something about things going on in the network.

Dr. Chase Cunningham:
These are actually a little bit older designs on zero trust network diagrams. But the point here actually is if you think about what you’re doing with firewalls, with configurations, with isolation, with segmentation, with data security, I mean, this model still stands. Basically, you’re just isolating things out into functional components, and you’re doing it with as much control as you can possibly get by using assets and technologies that enable you to do this for the entire infrastructure.

Dr. Chase Cunningham:
Next-generation firewalls, older firewalls, IDS, content filtering, activity monitoring, crypto, and access management: all of that enabled, all of that tied together, all of that giving you insight, and all of that allowing you to have better command and control of the infrastructure and better command and control and insight into the data that you’re trying to manage.

Dr. Chase Cunningham:
Ultimately, you’re going to have to have a management capability in there somewhere. Usually, this is going to be some sort of jump box or some sort of zone, but the point being, you’re doing this with a singular point of management, a singular point of control, and this is why it’s required and incumbent upon those organizations that are getting involved in this to use technology that allows you to do this management across diverse infrastructure with big-time proliferation. If you can’t do it at speed and at scale across thousands, if not hundreds of thousands, of assets, you’re not actually getting the value of the technology that you’re putting in place.

Dr. Chase Cunningham:
Finally, just a little bit more on the microcore and perimeter; nothing really earth-shattering on that one particularly. But what I hope you get out of this is that segmentation is absolutely key to zero trust. In order to do segmentation, you’ve got to have capabilities and technologies that work to understand the why of what you’re doing and enable your organization to deploy better rules that optimize capabilities across diverse infrastructure, across legacy, across hybrid. You’ve got to have solid command and control, solid capabilities, to know how to fix the issues, and you should be able to get actionable insights and act within that infrastructure based on the technology and the solutions that you’re using, aligned with your zero trust strategy.

Matt Dean:
Hey, Chase. Thanks very much. Really appreciate your thoughts. This is Matt Dean from FireMon, and I want to continue the conversation around what we’re doing at FireMon, really following on to what Chase is doing around zero trust and his thoughts there. It’s something we’ve been focused on at our company for a long time now, 15 years or so. When we look at segmentation projects, when we look at firewall deployments today, I’ll use the word firewall very, very generically. I mean, obviously, we all think about a traditional firewall or a next-gen firewall, and those clearly fit into a category. But to me, an AWS security group or a set of iptables rules on a Linux platform, those are all firewall technologies that are helping us to achieve segmentation, the micro perimeter, all of those things.

Matt Dean:
But what we thought at FireMon a long time ago was that that technology is very good, and it’s of course evolved, and next-gen firewalls are great capabilities. But they are all only as good as the policies inside them and how effectively we as humans can control them and get them to do what we want them to do, right, which is balance our access and risk. As we look today at organizations trying to achieve zero trust, we really see that these firewalls and firewall policies, the firewall implementation, is one of those things that’s standing in their way, kind of that traditional infrastructure problem.

Matt Dean:
In particular, and you’ll notice the title of my presentation is around frictionless security, in the same way that I’ve seen the continuous delivery folks in the app world go and address the friction around the process of delivering software, I see the same thing in network security today, or at least many parallels to those same ideas. We’ve got to work to reduce some of the friction. That’s what we’re focused on at FireMon right now with our global policy management platform. I just want to show you a few of those ideas this afternoon.

Matt Dean:
So first, very simply, and Chase articulated it very, very well, and I’ll do it very simply: we see a lot of new models evolving in network architecture, and they are new models with the same old problems, right? We haven’t been able to address complexity, control, and compliance across those new environments. So for instance, as we look at the challenges of managing a firewall policy, I would honestly make the case, and I think Chase did as well, that the problem has gotten bigger, right? As we look to achieve a micro perimeter, our old segmentation model may have encompassed, whatever, 250 very traditional old-school firewalls, and our micro perimeter may be thousands, if not tens of thousands, of points of control that we need to be able to manage inside the environment.

Matt Dean:
We need a way to do that, as Chase said, at speed and scale, right? Well, we got to go do this and do it for real and do it well. So again, for me, this is about reducing the friction inside of this process. In particular, I’m talking about this process that’s on the screen right now, which is kind of my representation of a very traditional network security management process.

Matt Dean:
I’ll start by acknowledging certainly as someone who’s operated network and network security infrastructures in the past, I know where this came from, and I know where this process came from, right? We started out. Every company would get their first firewall, and one person would manage it, and the policy was small, and everything went pretty well. Eventually, you got to the point where there were multiple firewalls, and there were multiple people on the team, and then outages started to happen because the complexity got to that point where you could no longer visualize and easily understand exactly what was happening.

Matt Dean:
Then along came compliance, right? So first we had this operational problem with things breaking, and then we had compliance, where everybody said you have to prove that the access model is what it should be. Those two things, more than anything else in my mind, drove us to this model, which is a very serialized, human-driven workflow for managing our network security access. I don’t believe this gets us to the new world. I don’t believe we can take this model and use it to slingshot us into a place where we are doing zero trust, to a place where we have micro perimeters that are doing very, very low-level segmentation.

Matt Dean:
So what do I think the answer is? Again, I harken back to this notion of frictionless security and reducing some of the manual friction that we see inside of this process today. We do that at FireMon with a platform called Global Policy Controller.

Matt Dean:
What Global Policy Controller is all about is separating our intent, or the access we intend to have, from our implementation. I would tell you, having been a guy who’s worked at FireMon for over 14 years and seen thousands of enterprise-class firewalls all across the globe, if there’s one thing that is consistent, it’s that our implementation of the policy inside of those firewalls is extremely tactical. All right?

Matt Dean:
We go and we create rules that say this IP can talk to that IP on this particular port. Of course, as you get to next-gen platforms, you can do that a little differently, and yet the underlying concept is much the same. So now we have these incredibly dynamic networks, where IP addresses change, sometimes every 30 minutes, and here we are on the network security side still managing everything as IP to IP. I think that is at the heart of our problem.

Matt Dean:
So when we talk about two things, enabling and automating, the first thing that we want to do is separate out how we define what access we need from the context of the network that is continuously shifting, and then go through an automation process that can take us from an intent, an understanding of that abstracted definition and requirements, all the way through automating the implementation.

Matt Dean:
So in GPC, what we do is first focus on defining that intent. You can see here in my picture, I think of it as that security intent orchestration platform, where that yellow line up at the top helps us to define globally what our security posture should be. Effectively, the way we do it with intent is we talk about who can talk to whom. But instead of doing that as an IP to an IP, we want to do it in an abstracted way. So maybe application A needs to talk to application B, or a set of users in Asia needs access to a particular service that resides in the corporate data center, or that has now moved to a next-gen cloud platform, or that is being spun up in a new software-defined data center.
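
As a minimal sketch of that idea, here’s what an abstracted intent statement might look like next to the IP-to-IP rules it replaces; the names and fields are illustrative, not GPC’s actual schema:

```python
from dataclasses import dataclass

# Illustrative only -- not GPC's actual data model.
@dataclass(frozen=True)
class Intent:
    source: str       # an abstract group: an application, a user set, etc.
    destination: str
    service: str      # the service allowed, rather than raw addresses/ports

# "Application A needs to talk to application B" expressed once as intent...
intent = Intent(source="app-a", destination="app-b", service="https")

# ...versus the tactical IP-to-IP rules it would otherwise become, which must
# be rewritten every time the underlying network context shifts.
legacy_rules = [
    ("10.1.4.11", "10.2.7.21", "tcp/443"),
    ("10.1.4.12", "10.2.7.21", "tcp/443"),
]
```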

Matt Dean:
That is how we want to think about defining what it is that we need. Again, we call that intent. If we can take that as our intent, we can then understand changing network context and have that be calculated automatically. Let me give you a quick example. When we have that brand-new DevOps team who is deploying applications into a very robust software-defined data center in a continuous delivery kind of model, they may have some elastic capability inside their application, right?

Matt Dean:
So they may have five database servers today, and tomorrow they need to have 50 to handle the capacity. What’s interesting about this intent model is that if we can understand that those 45 new database servers belong to a particular application, and they are serving the function in the database tier, in a lot of ways, we already know how to go implement a security model around that.

Matt Dean:
So we can simply take it: we know what to do with the five, so the next 45 follow through, and they follow through in that same pattern. What happens is you start to get to the point where not everything is a “change”, not everything is a new instantiation of that really complex, really serialized process that most of us use today to manage our way through and organize our thoughts around our security context and our security management.
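
Here’s a small sketch of why that works; the tags and addresses are invented for illustration:

```python
# Illustrative sketch: intent compiles against whatever context exists now.
# Context: which live servers currently carry which application/tier tags.
context = {("billing", "db"): [f"10.2.7.{n}" for n in range(21, 26)]}

# One intent: the billing app tier may reach the billing db tier on 5432.
def compile_rules(context):
    return [(ip, "tcp/5432") for ip in context[("billing", "db")]]

print(len(compile_rules(context)))  # 5 rules today

# Tomorrow the team scales from 5 to 50 database servers. From the intent's
# point of view nothing changed; recompiling simply yields 50 rules.
context[("billing", "db")] = [f"10.2.7.{n}" for n in range(21, 71)]
print(len(compile_rules(context)))  # 50
```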

Matt Dean:
So this abstraction, stepping our definition away from the implementation, is really important. What I think it does is allow us to get to a point where we can define our intent appropriately and then have it implemented automatically inside the appropriate platform. This is what we have focused on at FireMon for the last 15 years: integrating with a lot of different firewall platforms to enable that kind of third-party management, where we don’t really have to care.

Matt Dean:
Of course, we do internally. But our users don’t have to care exactly what that implementation is going to be or exactly who the vendor is going to be, all of those things. This is how we’ve achieved that. So I mentioned the separation between our intent, particularly our business intent, and our context, the context or implementation of the network. I would tell you that those two things by themselves give us a very important step forward, an important step towards this new world, where we really can manage a set of requirements, as intent requirements, that can get us into a world where zero trust is possible and manageable.

Matt Dean:
However, it is not enough just to say that we can separate those two things. So you see the third leg of my triangle down there is policy. What we’ve done is inject policy into this process directly. It may be that my DevOps team, for instance, is defining the intent, because they should know, right? They should know who their application needs to talk to. They should know who their users are. They should know the structure. They should know the ports and protocols and applications that need to communicate.

Matt Dean:
But what it doesn’t mean is that they get to go do whatever they want. They don’t get to allow their application to talk to anybody they want to. Security and oversight and risk management is still at the table, and we implement that in the form of the policy, which effectively prevents anyone from defining a set of intent or configuring the network in such a contextual way that it violates our security principles.

Matt Dean:
So those become the three inputs that allow GPC to convert us from this very manual, 15-or-so-step workflow process into an agile model where the friction has been reduced to these three inputs and then automated through what we call the policy compute engine. What Global Policy Controller does is take those three things, intent, context, and policy, and force them through a black box. What that black box, that compute engine, does is calculate all the necessary access on the network and then use an automation framework to push that out directly to the underlying network itself.
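
As a high-level sketch of that pipeline (the real compute engine is a black box, so the class and function names here are assumptions, not GPC’s API):

```python
# High-level, illustrative sketch of intent + context + policy -> rules.
# GPC's actual compute engine is a black box; these names are assumptions.
class Context:
    def __init__(self, groups): self.groups = groups
    def resolve(self, name): return self.groups.get(name, [])

class Policy:
    def __init__(self, banned_services): self.banned = banned_services
    def allows(self, rule): return rule[2] not in self.banned

def compute_access(intents, context, policy):
    rules = []
    for src_group, dst_group, service in intents:
        for src in context.resolve(src_group):
            for dst in context.resolve(dst_group):
                rule = (src, dst, service)
                if policy.allows(rule):  # guardrail check before anything ships
                    rules.append(rule)
    return rules  # handed to the automation framework to push to devices

ctx = Context({"app-a": ["10.1.4.11"], "app-b": ["10.2.7.21"]})
print(compute_access([("app-a", "app-b", "tcp/443")], ctx, Policy({"telnet"})))
```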

Matt Dean:
Now, there’s a couple of things that are worth noting here, right? As we work towards getting to this frictionless model of continuous security, we are going to give up some things. For instance, our compute engine, it calculates the access as minimally as it can, but then it interjects it into the firewall policy, and it does it in its own way. Right?

Matt Dean:
So for instance, we have to step away from some of these ideas, like we’re going to organize our rule base, which is a really good management technique, by the way. However, when we’re working towards this frictionless model, we’ve got to work towards understanding our intent more so than our implementation, and our implementation becomes less important, and that has a lot of implications around compliance and oversight and management, but it is a very, very important conceptual first step to getting to a place where we can manage these.

Matt Dean:
We also, inside of Global Policy Controller, have done some unique things because honestly, we’ve seen in our industry, a number of platforms come and go that required their platform be the source of truth. For us, we thought that that was a requirement that most organizations couldn’t live with, that every single change had to flow through this one particular interface and this one particular engine. There was still a need to manage some things directly and manually.

Matt Dean:
Certainly, as we think about early adoption of these kinds of technologies, that’s a great example of where manual management is going to continue. What we’ve done is focus on being the source of truth for only those applications that Global Policy Controller manages. That allows us to not only give you that frictionless perspective for your really dynamic environments, but also, alongside it, manually manage access where it makes sense in a robust legacy environment. I mean, Chase mentioned it, right? Those firewalls that have been around for 20 years that we’re kind of scared to touch, because things have become so brittle that they break when we go and manually touch them. We understand that those environments are going to exist. We understand that they need to be managed in some way, and manual is completely fine.

Matt Dean:
We are able to coexist with that management infrastructure as you begin to define your intent and adopt this new frictionless model. And it should not go unsaid, right? We understand that most organizations don’t understand their intent today, and that is a process. Chase outlined it really well as he went through the flow of understanding the hierarchy that gets you to zero trust: you’ve got to be able to adopt that asset-centric model, and you’ve got to be able to have a comprehensive hierarchy for understanding your information.

Matt Dean:
As you do that, you really think about it on the network side as building your intent and your intent model. What GPC can do then is help you implement that and help you bring it to an operational model that you can use day in and day out.

Matt Dean:
Ultimately, when we get to a place where GPC is the backbone of our network security management, we achieve a number of things. The first is dynamic policy change. What I mean by that is that when the context of the network changes, the policy can change right along with it, without a manual process to go through, as long as our security and risk policy says it’s okay. So that dynamic policy change is incredibly powerful. We’ve seen in some instances, for folks we’ve worked with as we implement GPC, that as much as 50% of the requests go away, because a lot of those requests were never about a change in intent. They were just about a change in context and implementation.

Matt Dean:
So it’s a very important concept, and it really allows, whether you’re doing a cloud virtualized environment, which already has some of these things built in, or you’re doing a traditional on-premise or a little bit of both, you can achieve that across that entire hybrid environment.

Matt Dean:
The second is this notion of embedded security. I think it’s a very important value proposition, which is that we can build security oversight into this process without a human, and we can do it proactively, ahead of time. Essentially, the way our engine works is it allows you to say these are the things that are okay to communicate on the network. So just as an example, you can broadly say, “Hey, every web server on my network can communicate as long as it’s communicating with HTTPS.” Right? Or it can talk internally only over HTTPS. You can define that broadly.

Matt Dean:
But our embedded security model also allows for exceptions. Ultimately, where we want to get to is this place where you only manage the exceptions to your network security policy, not manage every single request that comes through.

Matt Dean:
So what the embedded security engine allows us to do is pre-approve and pre-disapprove those scenarios we know about ahead of time. For everything else, we revert back to a human, and there is a one-step approval process inside of GPC that allows eyeballs to come look at the things we didn’t anticipate and approve them, and then of course evolve the policy, so whatever we didn’t anticipate, it can anticipate the next time.
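
A minimal sketch of that pre-approval-plus-exception flow; the request shape and checks are made up for illustration:

```python
# Minimal sketch of pre-approved access plus a human fallback; the request
# shape and the checks below are made up for illustration.
PRE_APPROVED = [
    # Every web server may communicate, as long as it's over HTTPS.
    lambda req: req["source_tag"] == "web" and req["service"] == "https",
]

def evaluate(request):
    if any(check(request) for check in PRE_APPROVED):
        return "auto-approve"
    # Anything we didn't anticipate reverts to a one-step human approval.
    return "queue-for-review"

print(evaluate({"source_tag": "web", "service": "https"}))  # auto-approve
print(evaluate({"source_tag": "web", "service": "ssh"}))    # queue-for-review
```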

Matt Dean:
Intent translation really is at the heart of GPC and how we think organizations can get to a zero trust model by defining that abstract intent and then allowing implementation to flow from that. Then ultimately, Chase mentioned automation, orchestration, automated distribution of the policy itself has to be part of this. For us, that means we’ve got to be able to communicate with all those different devices that are implementing the policy, and we do that inside the product itself to the point where, for lots of normal cases, you don’t have to worry about what the rules on the firewall are anymore. It’s handled for you, and you can certainly still go look at them, but the platform is now providing an abstracted way to interact with those things, and the rules themselves become less interesting.

Matt Dean:
It is probably an important point, and one I talk about with our clients often, that so many of us in network security spend so much time looking at our firewall rules because our firewall rules are the best source of documentation about what our policy is. That is on one hand true and on the other hand not a great place to be. We should be able to communicate what our policy is in some way other than looking directly at the implementation. But for lots of organizations, that’s not where we are, and certainly, I’ll speak for myself: for firewalls I’ve managed, going and looking at the rules provided that intrinsic value.

Matt Dean:
I will tell you that going and looking at your intent is much better. You’re going to look at many fewer entities, and they are going to represent the business communication you want to happen, not the IP-to-IP communication that ultimately ends up implementing that.

Matt Dean:
All right. So you can learn more about GPC in a couple of different ways. I’d certainly recommend our website to you, as well as the document links that are attached to the webinar, which I encourage you to use. So with that, I will turn it back over to Lindsay.

Lindsay Brechler:
Thanks, Matt. Let me get myself unmuted here. Yeah. So I just want to remind everyone that you can submit questions, in addition to the ones that have already been submitted, through the ask a question link at the bottom of the web browser there. So let me toss a couple of questions to you guys, if you would. We’ve had them come in. Let’s go ahead. Okay. So the first question came in at the beginning of Matt’s presentation, so it may refer some to Chase’s as well, but Matt, I’ll ask you first maybe. How hard is this to do in a multi-vendor environment? How hard is zero trust to do when you have multiple vendors?

Matt Dean:
First of all, it’s a great question. As we approached the problem, and obviously we’re a small component of the overall zero trust framework, there’s a lot going on besides network. But as we looked at the network, multi-vendor was a given. We had to go do this. We had to go achieve that. Separating your intent and supporting that model for a single vendor really doesn’t fit the needs of very many organizations at all. I will tell you, from my perspective, it is a challenge, and we recognized that.

Matt Dean:
Here at FireMon, it’s what we’ve been doing for the last 15 years. We’ve focused on multi-vendor environments, and we’ve focused on integrating into all these different platforms. Honestly, we have the framework, and it’s more than just a technical framework. It’s about business relationships with all of these vendors, so we understand when changes to their schema are going to impact us, and we can handle that ahead of time, before a version rolls out to the field.

Matt Dean:
All of that infrastructure, we have it, and we have it at FireMon over the last 15 years. So as we looked at this particular challenge, the challenge of kind of this frictionless security in an agile environment, we are relying on a whole lot of infrastructure and history to achieve that, but we have achieved it, and it’s really important.

Lindsay Brechler:
Thanks, Matt. Maybe just a follow-on question from me here. Just do you feel like cloud changes that approach, or is it just another way to think about multi-vendor environment?

Matt Dean:
Yeah. So cloud, right, I mean, we all sometimes think of cloud differently. From my perspective, it is a little bit of the latter, right? We are now implementing security frameworks in cloud or virtualized environments in a number of different ways, from a platform like NSX from VMware that can natively do this in a software-defined data center, right? It natively has network security controls. We’ve also seen, for instance, organizations that are going to AWS, and they are layering on traditional firewall technology in a virtualized environment. So you may have AWS security groups piggybacked with a Palo Alto, a virtualized Palo Alto.

Matt Dean:
All of those things are incredibly important, and it’s incredibly important that you have direct control of all of that, regardless of your architecture. That’s really what we’re working towards is to get you to that place where we can abstract away some of the nuances of these vendors to the point where in a lot of ways, you shouldn’t care about what the implementation is. There’s an engine. There’s a black box, that compute engine that can handle that translation and that implementation piece for you.

Lindsay Brechler:
Great, thanks. Okay. Next question, I think. So Matt, this is one to you again, I think. Zero trust being such a wide topic, for FireMon specifically, is there a particular part of the zero trust architecture that FireMon fits into, broadly or for a specific product?

Matt Dean:
Yeah, I think that’s right. I look at the zero trust framework, and it is so wide-ranging, right? For all of the important reasons, I think we have to achieve this notion of zero trust, you’ve got to attack it from a lot of different angles, and Chase certainly knows this better than I do. For me, on the network side, and being the old firewall guy that I am, as I look at those points of control in our network, they are a very important part of this story. They’re certainly not the entire story. Chase, do you agree with that? Did I get that wrong?

Dr. Chase Cunningham:
No. I mean, I think that everything you’ve said is dead on. It’s just the growth and sort of the volume of ways you can get things wrong, honestly, that’s a big issue that people kind of don’t pay attention to. I mean, yes, you can do things piecemeal, and you can move manually. But you won’t ever get yourself out of that hole if you don’t kind of change the way that you approach it. The difference that I see with organizations that are doing well in this space is the ones that have aligned themselves with the strategy, because it just makes things clearer and everyone can fall in line with understanding why they’re doing things.

Lindsay Brechler:
Okay. Thanks. Chase, this next one is to you. So I think this one is referring back to that circle diagram you had with the various parts of that zero trust phasing. Looking at the data classification piece, are there classification schemes or approaches that you’ve seen work particularly well?

Dr. Chase Cunningham:
I have not seen anything that is the end-all be-all in that particular area. But what I have seen work is organizations taking the time to step back and say, “Okay, do we really understand all of the areas where our data lives? Do we understand the value of that data? Are we able to come up with some kind of plan, and put it in place, for how we’re going to schematize that information?” I mean, the beauty of the zero trust avenue here is that it can be down to your organization to make that decision. It’s all yours. You just have to make it yours.

Dr. Chase Cunningham:
Some companies have put data schemas together, where it’s red, white, and blue. Some have done it where it is unclass, class, and top secret. It doesn’t really necessarily matter how prescriptive you are in the way that you’ve put a schema together. But it’s just a matter of knowing what the data is and where it is and the value it has and then being able to put something in place to understand what you’re doing with it and why.

Lindsay Brechler:
Great. Yeah, and Matt, so on the GPC side, knowing what I do, can you talk a little bit about using that data and those classifications to define the intent? Maybe I’m thinking tags, but maybe there are other ways.

Matt Dean:
Yeah. No, I agree. As we talk about intent and as we kind of work with folks who are going through this notion of intent definition, this is kind of… When you get to that question of where do you start, this is kind of where it is, right? It’s about understanding the data that you have on the network, understanding your infrastructure, your application hierarchy. All of that starts to become kind of the key concepts around how you build a set of tags that represent your hierarchy inside of GPC itself.

Matt Dean:
So really, you follow along from that hierarchy directly through to implementation. We use tags to do that inside of the GPC platform. But when you think about that, right, I mean, that abstracted notion of what it is that you want to manage, it flows kind of directly from that classification in the hierarchy there.
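
A tiny sketch of what that might look like; the tag names and values are invented for illustration and are not GPC’s actual tagging model:

```python
# Tiny, illustrative sketch: a classification hierarchy expressed as tags
# that intent can reference instead of addresses. All names are invented.
tags = {
    "10.2.7.21": {"app": "billing", "tier": "db", "data": "restricted"},
    "10.3.1.5":  {"app": "intranet", "tier": "web", "data": "internal"},
}

def members(**wanted):
    """All assets whose tags match every requested key/value pair."""
    return [ip for ip, t in tags.items()
            if all(t.get(k) == v for k, v in wanted.items())]

# Intent statements can now reference the classification directly:
print(members(app="billing", tier="db"))  # ['10.2.7.21']
```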

Lindsay Brechler:
Oh, I’m on mute. Sorry. Another question here that is basically: is this just security automation? Is there more to it? Is it different from the way that we’ve talked about automation before? And I guess, Matt, to you on that?

Matt Dean:
Yeah, it is, from my perspective, right? When I look at most automation projects going on, particularly in the enterprise today, and if you remember back to the slide with that workflow process, what I see is people taking that notion and trying to make it go faster, right? So they’re looking to automate steps inside of it, for instance some of the engineering facets, so that the process can go fast enough to achieve the automation that you’re looking for and reduce costs and reduce the implementation time and all of those things.

Matt Dean:
What I have found with the clients that I’ve worked with is that at some point, that concept breaks down. You can only go so fast when you’re using requests, a bunch of individual requests from end users that all have to flow through that process. Even if it could go faster, we’ve seen that it can’t go fast enough. Really, the difference for me with GPC is stepping away from that notion of everything being a request, building in that intent hierarchy that spares us the churn we have in that process today, and then allowing the compute engine, as opposed to a human being, to understand what it is that needs to change and go directly make that change.

Matt Dean:
That’s a big mind shift for people, right? But when you can get there, and for the customers that work with us on this today, they see the engine itself basically creates the same access that they would have done anyway, but it does it very, very quickly, in a very agile manner. Not only does it put rules in. It takes them out. So as servers get brought up on the network, it spins up access. As they’re decommissioned, it removes it. All of that happens without a human in the loop at all.
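
A small sketch of that lifecycle behavior, with invented event names and addresses:

```python
# Small, illustrative sketch: access follows the server lifecycle with no
# human in the loop. Event names and addresses are invented.
active_rules = set()

def on_server_event(event, ip, service="tcp/5432"):
    rule = (ip, service)
    if event == "provisioned":
        active_rules.add(rule)       # access spins up with the server
    elif event == "decommissioned":
        active_rules.discard(rule)   # and is removed when it goes away

on_server_event("provisioned", "10.2.7.26")
on_server_event("decommissioned", "10.2.7.26")
print(active_rules)  # set() -- no stale rule left behind
```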

Matt Dean:
So really GPC, yeah, I think it’s different than kind of traditional automation, and it is about that process and attacking what I think is at the heart of the problem, which is how serial that process is and kind of how long running it is.

Lindsay Brechler:
Okay. Thanks. There’s one question left that we didn’t get to, but I think we are out of time. So I want to thank both Chase and Matt for your time today, and everyone who’s viewing the webinar. I just want to remind you that the slides are available for download there under attachments and links, as well as an e-book. The webinar will be available for replay shortly at the same link that you’re using now. Feel free to share this if you think it might be of use to someone else you know, and we’ll talk to you soon. Thank you.

