5 Steps to Keep Network Security Enforcement Points Secure and Up-To-Date

On-Demand

Video Transcription

Randy Franklin Smith:
Good day everybody, Randy Franklin Smith here, and today we’re talking about keeping up when workloads get moved around, or new workloads come out, or workloads are retired. In fact, it may be a surprise unless you really think about it that there is important stuff to be done, network-security-policy-wise, when a workload is retired. Otherwise we’re leaving holes open and creating security risk.

Randy Franklin Smith:
And that’s what we’re going to talk about today. In fact, we’re going to talk about a method, a process to follow to keep our global firewall policy up to date as workloads shift and move around and migrate. And we’ve got the perfect sponsor for this: FireMon is making today’s real training for free possible. And I’ve got Tim Woods back with me again, and also let me introduce Josh Williams. These guys eat, sleep and think firewall management. And when we say firewall today, folks, let’s think about anything with a network ACL in it. So that’s not just a classic Check Point firewall, but of course your next-gen firewalls, and it’s everything else: routers, switches, VPN devices, wireless access points. It goes on and on. And guys, thanks a lot for making today’s webinar possible.

Tim:
We’re happy to be here, Randy. Thank you for having us.

Josh:
Yeah, thank you.

Randy Franklin Smith:
And by the way, folks, you’re going to see some really awesome technology today when these guys take over. So here’s the five-step process. First of all, we need to know when new connections happen, and we’re not talking just about physical network connections like connecting to a new network or adding a subnet; we’re talking about a lot more than that. So when I say connections in today’s webinar, in that context, I’m talking about logical connections. Then we’ve got to understand the traffic requirements. We need to really understand the devices that need to communicate with each other at both ends of this workload connection. And then we need to think about what is the risk or security differential between what we’re connecting to and what we’re connecting from. And then what might go along with that is, do we need additional enforcement points? And by that we mean some type of a firewall.

Randy Franklin Smith:
Folks, please use the question window and share any and all feedback with us. We want to hear all of it. And as always, I’ll work as much of that in as I can. And let’s see here. So let’s dive into the first one, identifying new network connections. The first thing that I want to get across about this is that this is really an event that needs to be recognized when it happens in the organization. The network connections are going to happen whether you have identified them or not. Unless you’re a very, very small IT team where you know everything going on, you’re all in one room and you can overhear conversations, connections are going to be rolled out, and you as the security team or the firewall team may or may not know about these.

Randy Franklin Smith:
So oftentimes these new connections happen and folks don’t know about them. That’s yet another advantage, by the way, or dividend that you reap with good internal segmentation: folks can’t just slap something else onto the network and have it work, because you’ve got segmentation and a least-privilege or zero trust model going on with your network routing. And so you’re going to have to find out about it before anything can communicate with it. But guys, you work with customers on issues like this all the time. How often do customers find out that, wow, we’ve got connections to stuff we did not realize?

Tim:
Yeah, this is an exciting topic, Randy, because I think everyone at some point or another will struggle with this, has been struggling with this, or definitely will run into it in the future, especially as we look at the velocity and the acceleration that’s going on in the network world today, especially as people embark on their digital transformation journey and cloud-first strategies and things like that. So as we’re deploying assets and resources and applications, or purposing access to those assets, resources and applications, we need to identify what’s in scope and how we’re going to do that without having to do the manual ping, traceroute, SSH, Telnet, all that other kind of good stuff. But yeah, even more important is finding out what’s already there, right?

Tim:
You can’t secure what you can’t see. And it’s very hard to manage what you don’t know about. And so understanding what that access and that connectivity looks like, even for the existing things. Sometimes as you’re exploring how you want to purpose access, or deploy those assets, you stumble on these things. You’re like, hey, what is that doing there? And what do I do about it? And how is it secured? What are the data controls associated with that? So, yeah, a very interesting challenge sometimes to figure out.

Randy Franklin Smith:
Thanks. John, I want to address your question real quick. John asks, does Windows log an event on a new connection? So John, today we are really on a very different plane than the Windows operating system and the logs there. We’re talking about your global network. When someone, for instance, let’s go through the bullet points here, sets up a site-to-site VPN connection or a dedicated network connection to a subsidiary or a business partner. As an example there, think of the HVAC contractor who had a network connection to Target. We’re also seeing more and more either VPN connections or direct connections to cloud virtual networks. So in Azure you can use infrastructure as a service. You’ve got one or more virtual networks running up in Azure, and then you can connect your on-prem network to the VMs in Azure with either a VPN connection or, let’s see, I forget what they call it, it’s Direct Connect or ExpressRoute or something like that.

Randy Franklin Smith:
But then there’s other… So those are what I would call, I don’t know, packet-routable connections, or what we would call a network connection in the traditional sense. But I also want folks to think about things like cloud application gateways. In fact, I’ve got a couple of slides to help demonstrate the difference between these, but before I dive into those two things, let me throw out one other possibility there. And that is maybe it’s not really a new network connection, but a new workload is deployed at the other end of an existing network connection. So here’s just one example. You already have an ExpressRoute or a VPN connection to your virtual network up in the cloud at Azure or AWS. Great. But then you suddenly deploy a new virtual machine up there in the cloud.

Randy Franklin Smith:
This VM hosts a website published to the internet. Okay. So we don’t have a new network connection, but we have a new workload out there on that segment. And it’s exposed to the internet and chances are we need to put some additional enforcement points in there so that if that thing is compromised, the bad guy doesn’t have direct and unfettered access to everything on the other side. So it’s not always another connection. It could be a new workload out there at the other end of an existing connection. Josh, any thoughts about that? Do you see that kind of thing coming up?

Josh:
Yeah, absolutely. I mean, the idea is that workloads are going to move and that we want to be able to have knowledge of the connections. So if the workload moves from on-prem to cloud, or even from some on-prem data center to another data center that you might have, we want to be able to have knowledge of that so we can make smart decisions. Because the difficult part of that is that you have many people that are wanting to get in a room and say, okay, we moved this workload, how do we then make the correct security policy adjustment? Because the routing is going to be there.

Josh:
So how do we make the right security policy adjustment? And my past experience has been that it sometimes leads to arguments, just people trying to prove others wrong on what their security policy is actually going to do or what they think it’s going to do. So yeah, the issue starts to really arise when the workloads start to migrate. We’re going to cloud, we’re seeing it across the NSGs in Azure, the security groups in AWS. We want to make sure that we can be smart about actually applying to that security group, and knowing that whenever it goes to this location, wherever that is, in the cloud or on-prem, we can make smart decisions. And that’s just an enterprise struggle right now: how do you make that smart decision?

Randy Franklin Smith:
So how do you know when this event occurs? Well, you’ve got to have your tentacles or your antenna out there detecting either technically or from a business process level when these things happen.

Randy Franklin Smith:
One of the most important things is to be plugged into change control workflows, and be thinking and be questioning, be surfacing the issue each time a change goes through where you could surmise that a new network connection is involved or a workload is being stood up here or moved. So we need to capture this and then trigger the rest of the process. How could you technically detect this? Well, to the extent that you have network monitoring going on, recognizing new applications, recognizing new flows of protocols, and certainly recognizing new source and destination addresses that were outside of what you understood the in-use address space to be in the past is another way to detect when these connections happen.
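To make the technical-detection idea concrete, here is a minimal sketch (the networks and the flow tuple format are invented for illustration) of flagging flow endpoints that fall outside the address space you believe is in use:

```python
# Hypothetical sketch: surface "new connection" events from monitoring data
# by flagging any flow endpoint outside the known address space.
import ipaddress

# Address space we believe is in use (invented example networks).
KNOWN_NETWORKS = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8",       # on-prem
    "172.16.5.0/24",    # site-to-site VPN to a business partner
)]

def is_known(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in KNOWN_NETWORKS)

def unknown_endpoints(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples from monitoring."""
    seen = set()
    for src, dst, _port in flows:
        for addr in (src, dst):
            if not is_known(addr):
                seen.add(addr)
    return sorted(seen)

flows = [
    ("10.1.2.3", "172.16.5.10", 443),   # expected partner traffic
    ("10.1.2.3", "52.160.0.5", 443),    # unexpected cloud endpoint
]
print(unknown_endpoints(flows))  # ['52.160.0.5']
```

The same set-membership check works against flow logs, IDS output, or firewall accept logs; the point is to alert on the first address that falls outside what you thought was connected.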

Randy Franklin Smith:
Other than that, vulnerability scanning and network footprinting is probably another really good way to detect, hey, there’s stuff out there or there’s connections out there I didn’t know about before. But like I said, you’ve got to think about what it takes in your organization, certainly whenever we stand up new technologies, but all of this hopefully should be flowing through some kind of change control workflow. And I think plugging into that and analyzing and recognizing when this kind of event is tied to a change going through that workflow is probably the most systematic way to go about identifying, hey, there’s a new connection or a new workload, we need to assess the security of it. Now, let’s just talk about the difference between classic connections to another network out there, or this other thing that I’m seeing more and more and more, which is a web application gateway.

Randy Franklin Smith:
So here is a normal connection to a cloud network. We’ve got a VPN or a dedicated connection that’s going out through our internet connection, or the periphery of our network, and hitting another network out there. But how about a web application gateway? So every time that you implement a new web application, you need to think and recognize when a gateway is being installed on the network. So here’s one that I’m most familiar with, and that is cloud-based log management solutions. Inevitably they have you download a virtual appliance or they ship you a physical appliance that you plug into your on-prem network, and it calls home out through the firewall. There’s no VPN, there’s no direct connection. It’s just doing an HTTPS connection out to the cloud application. But what’s really important to realize here is that thing has popped up like a gopher hole in your network.

Randy Franklin Smith:
And anything that compromises or happens to the cloud vendor can suddenly come back down the pike and pop up out of that gopher hole and then hit the rest of your network, unless we’ve put controls around that. It’s been a long time since I’ve talked about this, but Josh or Tim, are you seeing customers concerned about this? And I only named one example, a cloud-based log management solution, but there’s lots of others where you deploy a physical or virtual appliance on your network that then provides the point of presence for that cloud application.

Josh:
Hey Randy. Yeah, we actually just talked to a government customer that had an issue where they did two things. They had an application that moved and they never adjusted the security policy. So almost exactly what you’re speaking about now: they didn’t adjust the security policy, or didn’t tear it down, after they moved their workload. And so they had an open socket that attackers actually used as an attack vector to get into their network. So we’re seeing this a lot with our customers.

Randy Franklin Smith:
Cool. Okay. So we’ve identified that there’s either a new network connection, a web application gateway, or a workload that changes the threat landscape on a network.

Randy Franklin Smith:
So the first thing to understand is: what are the protocols actually involved? And sometimes just knowledge of the workload and your familiarity with the technologies involved will tell you all that you need to know, but oftentimes we need to consult the documentation, and vendors are getting better and better about documenting and being very transparent and forthright about what the network traffic requirements are. So obviously use that, but at the same time, to me, there’s no substitute for sniffing the packets and actually looking at what’s going across the wire. Now, we need to compare all of that, because for the period of time and the amount of traffic that we observed, we may not see the full picture.
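One way to use those multiple sources of truth is to treat the documented ports and the captured ports as sets and diff them, so a gap in either direction stands out. A minimal sketch, with port numbers invented for illustration:

```python
# Hypothetical sketch: compare what a packet capture actually showed against
# what the vendor documentation claims. All ports here are made up.
documented = {443: "HTTPS to vendor cloud", 8443: "nightly sync, every 24 hours"}
observed = {443}  # ports seen during a short capture window

unobserved = documented.keys() - observed      # documented but never captured
undocumented = observed - documented.keys()    # on the wire but not documented

for port in sorted(unobserved):
    print(f"port {port}: documented but not captured ({documented[port]}) -- run a longer capture?")
for port in sorted(undocumented):
    print(f"port {port}: observed but undocumented -- investigate before writing rules")
```

A port that is documented but never captured is exactly the 24-hour-sync case: the capture window was too short. A port that shows up on the wire but nowhere in the documentation deserves a question to the vendor before a rule is written for it.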

Randy Franklin Smith:
And the documentation may point out that every 24 hours there’s this kind of sync or backup process or whatever. And oh, by the way, that uses a different port. So using multiple witnesses, multiple sources of truth, helps you get the full picture here. And the other thing that could be said is, for sniffing the packets, it may be tempting to just run it for a few minutes and see what it captures. But again, you may not get the whole picture. So if possible, running it for a longer period of time, for what you would consider a full usage cycle or a full business cycle for that particular workload, is just going to help you identify any traffic requirements that weren’t immediately obvious and help prevent an outage down the road. All that being said though, more and more of the traffic, regardless of what the application or technology is, is just good old HTTPS, or at least stuff going over port 443, which is good and bad.

Randy Franklin Smith:
It’s good in that it simplifies network rules. It’s bad in that everything looks the same on the wire if you’re looking at port number, when in fact these are completely different applications that may have nothing to do with HTTPS at all. But the fact of the matter is, that’s how it is. So again, the point here is understanding the protocol requirements. Now, closely related to that, and you could even argue that we could have made it one step, is understanding the endpoints on either end of this workload connection. So here, maybe this top network is up in the cloud, or maybe it’s a business partner or subsidiary or whatever; knowing who on our network and who on the target network really need to communicate with each other. Because our hope is to avoid just opening up the entire address space on either end with the rules, especially coming in.

Randy Franklin Smith:
If we only need to talk to this one orange endpoint down here, then we should endeavor to limit the rule allowing that communication to that one endpoint, to the extent possible. But I think something that folks don’t think about is restricting the source addresses on the trusted entity or business partner or cloud app or whatever, and what would be the value of that. So oftentimes we’re focused on protecting our targeted endpoint, but if we know that we’re only going to be getting traffic for this particular workload from these two endpoints, then if we restrict our network policy to that, what does that do for us? Well, if these other endpoints are compromised, the bad guy’s not going to be able to directly target us down here through this allowed connection.

Randy Franklin Smith:
The bad guy is first of all going to have to compromise one of the only two endpoints that we’ve opened up traffic for. So it’s not just about protecting access to your endpoints, it’s about limiting how many endpoints on the source network can attack you. Any thoughts on that specifically, guys?

Randy Franklin Smith:
It’s okay if you don’t. All right. So let’s see here. I was just thinking that you guys might have something to add there, because we’re so focused on the target that we don’t think about this factor: it’s not just limiting the attack surface, it’s limiting the attacker surface, I guess you could say.
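The attacker-surface point can be sketched as a rule that names not just the protected endpoint and port, but also the only source endpoints allowed to originate the traffic. Every address below is invented:

```python
# Hypothetical sketch of a least-privilege allow rule: only two named source
# endpoints on the partner network may reach our one protected endpoint.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    sources: frozenset   # the only allowed source IPs on the partner side
    dest: str            # our protected endpoint
    port: int

RULE = Rule(sources=frozenset({"172.16.5.10", "172.16.5.11"}),
            dest="10.1.2.3", port=1433)

def permitted(src: str, dst: str, port: int) -> bool:
    return src in RULE.sources and dst == RULE.dest and port == RULE.port

# A compromised third host on the partner network is blocked outright:
print(permitted("172.16.5.99", "10.1.2.3", 1433))  # False
print(permitted("172.16.5.10", "10.1.2.3", 1433))  # True
```

The design choice is the `sources` set: without it, any compromised host on the partner network could attack the target; with it, the attacker must first take over one of the two named endpoints.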

Tim:
Yeah, sorry Randy, I was having a technical difficulty with my mic being muted there. This becomes all part of the context of the policy. And Josh is going to talk about this later when he introduces us to our global policy controller technology, which I think is going to be a real treat for everyone to see.

Tim:
But that context of who needs to talk to who, where are they coming from, what do they need to have access to, over what ports? All of this forms the policy that we’re going to need to create that instance of an enforcement rule at some point, wherever we are allowed to enforce, or wherever we can influence enforcement of the data controls around that data. This all becomes part of the context that we’ll need to develop.

Tim:
And at some point later on here, we’ll talk about how we make that more of a collaborative conversation around these things, because somebody owns that, right? Somebody has responsibility for that workload. Somebody has responsibility for that asset, that resource. There’s a business owner. There is a stakeholder. There is somebody who has responsibility and interest in, and is personally invested in, that data. So I was going to comment earlier, too, when we were talking about port 443 and SSL: it is a part of life today that almost everything is encrypted. But even having said that, we have nefarious individuals out there looking for the stuff that you wouldn’t expect. There are bad guys, bad actors out there, doing what I wouldn’t even call hacking.

Tim:
They’re just scanning the network, public IP addresses, looking for exposed resource connections and database connections and data connections that are unencrypted. And again, today you wouldn’t think that that would be the case; you would think almost everything is either encrypted or has an encrypted connection before you can grab access to it. But that is not the case. We’re seeing it every day where people are having their stuff either hijacked and/or held hostage, because it’s either not encrypted or it doesn’t have the right data controls around it.

Randy Franklin Smith:
I’m getting some great questions, and some of them are just thoughts. First of all, Jason says, are we to expect the network teams to thoroughly document all of this, the connections and the approvals and stuff like that? He says he’s thinking from an audit perspective. So that probably rings a bell for you, Josh and Tim, because I mean, that is one of the needs that you guys recognize with your technology. We don’t want to get into it too deep here, but that is a big part of this whole discussion, right?

Tim:
Go ahead Josh.

Josh:
No, I was just going to say it absolutely is. Coming from a network engineering and security engineering background, that is something that I always dealt with: the auditors. Right. And what I always talk about in our design discussions is that we have to be prepared for the audit from hell, right? That’s kind of the concept that always keeps coming up, because when it comes to documenting, the last thing a network engineer or a security engineer wants to do, and let’s just be honest about it, the last thing we want to do is actually have to sit down and go through and document this.

Josh:
So what I will make sure to run through when we pull up the product is the list of documentation that we already provide, pre-populated and ready to go. So when you’re building these global policies for all of your enterprise security infrastructure, you can ensure that when that “audit from hell” shows up, you have all the documentation you need: not just documentation to prove that it’s not happening, but also documentation to show how we’re going to keep it from happening. So we put it on both sides of the rule publication for the engineers.

Tim:
If you don’t put it in the context of the policy, Randy, if you don’t attach it to the actual security policy, you’re relying on something external, whether it’s a CRM, and listen, today we still see email being used. We see Word documents being used. We see kind of homegrown databases being used. We see spreadsheets, oh my God, if I told you the number of spreadsheets that I see for tracking changes in security documentation, it might surprise you, but we still see that. But all of those fall short, because of exactly what Josh says: you’re relying on the human element to keep that updated and to keep everything in sync, and again, at the velocity and the acceleration that things are moving and dynamically changing today, it just isn’t realistic or feasible that that documentation is going to stay in sync. And so wherever you can automate the documentation process, you will increase your probability of having accurate, timely documentation against your security policies.

Randy Franklin Smith:
Yeah. And here’s the thing though. I hate doing something just because I’m going to get audited. I mean, it’s true, and that may be worthwhile just from a dollars-and-cents and time-and-expense point of view. But oftentimes, if we can use audit requirements and the audit burden to improve our processes, we reap a lot of other security dividends. And just one of them is, if we’ve got tens of thousands of firewall rules and we don’t know why a rule is there: we’ve done other webinars in the past with you guys to demonstrate all the risks that come from not being able to document, not being able to justify a rule. Because if you can’t justify a rule and why it’s there, then you can’t determine if the rule still needs to be there, or if it’s actually opening up a security hole and a risk as opposed to allowing something that’s legitimate.

Randy Franklin Smith:
So I think the audit thing may be the pain point that causes us to stand up and pay attention, but there’s a whole lot of other benefit going on here if we can address the stuff that’s coming up. Let’s see here, and everybody loves the term audit from hell. Doug says, yeah, it’s the audit from hell for the bastard operator from hell. Yes, Doug, I’m a real fan of BOFH, he is hilarious. Let’s see here. Yeah. Now, this is really interesting. Doug says, as a potential case in point, why are there so many TCP 443 connections for this GoToWebinar session?

Randy Franklin Smith:
And that’s a great point. GoToWebinar’s got all these connections on 443, and it’s just one webinar session going on here. But I’ve actually looked into this before, and there’s one for the webcam, there’s one for the dashboard, there’s another for chat and questions, there’s another for audio, and then there’s another for the screen sharing, stuff like that. So that accounts for why so many connections, but it’s a great point. And they’re all going to different addresses.

Randy Franklin Smith:
Oh, Aaron says, I’m thinking micro-segmentation and NSX here. And so, yeah, totally. I think micro-segmentation is great, NSX is very powerful. But since you bring NSX up, that’s just your VMs. We’re talking about limiting networking activity on your on-prem network beyond just between your virtual machines. What about all the other network zones out there? And let’s see here. Okay. I think that takes care of all of them. That was great feedback. Thank you everybody. So the next step then is: what are the risk differentials between these two areas? Now, does it really matter? You might say, well, ideally, each zone of your network should be protected by least-privilege or zero trust firewalls, so that only the ports and source and destination IP addresses that really need to talk to each other are allowed to talk to each other.

Randy Franklin Smith:
And that’s what we were saying just now about micro-segmentation and so on. Well, that’s true, but here’s the thing, number one: that’s ideal, but realistically, our network rules are seldom, seldom, seldom that strict.

Randy Franklin Smith:
So it is still important to understand how risky and how exposed and how dirty, if you will, this network zone is, as opposed to this one. These could be zones within the same premises, even the same on-site premises, right? That has nothing to do with it; it’s about much more than that. Or by the same token, these could be on opposite sides of the world. So the geographical distance, or what the connectivity is between these two zones, is completely separate. And you notice I’m using the word zone instead of segment, because although network segments often map to what I’m talking about as far as different security zones, that is only an artifact of network routing. It doesn’t have to be that way, such as with micro-segmentation. The other thing, though, that I want to get across is that routing rules are only a subset of your available network controls.

Randy Franklin Smith:
So what do I mean, what are some examples of that? Well, let’s say we do have a beautiful, to-the-limit zero trust model implemented, so that every endpoint can only communicate with those other endpoints that it really has a reason to, and only on the ports that it should. Well, risk differentials can still warrant additional controls. For instance, maybe we’re saying that our little red endpoint down here can only talk to these two orange endpoints up here, on exactly the ports that we wish. But what if those endpoints are more widely accessible by other stuff, and they are much more prone to getting infected with malware or being directly attacked by other stuff on this larger network up here?

Randy Franklin Smith:
Well, that means the risk level that those endpoints are exposed to is a lot higher than the risk level of this isolated endpoint down here, which is running in more of a bastion environment. So there’s a big risk differential here, and it may be that we need to do more than just restrict the packets that can flow. Maybe we need to look inside those packets for things like malware, or look at the traffic patterns and detect possible attacks, denial of service attacks or whatever else, coming through those ports that are allowed. And I’ve got another example of this later on that you’ll see. So the point is, even with good segmentation, and certainly if you don’t have full micro-segmentation, when there’s a significant risk differential between these two zones, then we need to think about this: are additional enforcement points and additional network security technologies warranted?

Randy Franklin Smith:
So you’re deploying a new cloud application gateway. Well, maybe you should put it behind some kind of internal firewall. So let’s just think about our log-management-in-the-cloud idea. We get an appliance from the vendor and we deploy it. Why should that appliance be able to communicate on any port to every endpoint on our network? If it’s just collecting logs, let’s say it’s a syslog endpoint, then we shouldn’t allow it to reach out at all, unless it’s out there to discover new stuff. Let’s say it’s just receiving events. So then, perfect example, let’s put a firewall in front of that thing. And when I say firewall, it could be any kind of enforcement point; it might just be rules on your switch that only allow it to receive syslog events and don’t allow it to reach out, or whatever the requirement is.
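A sketch of what that enforcement point might allow for the log-collector appliance: inbound syslog only, plus a single HTTPS call-home path. The addresses are placeholders (203.0.113.0/24 is a documentation range), and the vendor's real call-home requirements would come from their documentation:

```python
# Hypothetical sketch of the enforcement-point policy around a cloud
# log-collector appliance. All addresses are invented placeholders.
APPLIANCE = "10.2.0.50"
VENDOR = "203.0.113.7"   # vendor call-home address (documentation range)

def allowed(src, dst, port, proto):
    # Inbound syslog to the appliance from the internal network.
    if dst == APPLIANCE and port == 514 and proto in ("udp", "tcp"):
        return True
    # Outbound call-home: HTTPS to the one vendor address only.
    if src == APPLIANCE and dst == VENDOR and port == 443 and proto == "tcp":
        return True
    # The appliance must not initiate anything else on our network.
    return False

print(allowed("10.1.2.3", APPLIANCE, 514, "udp"))  # True: syslog in
print(allowed(APPLIANCE, "10.1.2.3", 445, "tcp"))  # False: no reaching out
```

The default-deny last line is the gopher-hole control: even if the vendor's cloud side is compromised, the appliance has no path back into the rest of the network.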

Randy Franklin Smith:
We’ve got to identify that requirement, and the point on this slide is to set up the enforcement point. Here’s another example: a new application is deployed on your virtual network in Azure, but that application is internet-facing. Wow, it’s time to really look at the segmentation, and probably time to add some more segmentation to protect downstream resources. Think of the Equifax hack. That’s just the first one that comes to my mind; there’s many others where the bad guys took advantage of the fact that there was very lax filtering between a web server and downstream database servers, and then downstream servers from those servers. Or let’s say your internet email is first routed through a cloud email security gateway, and they do a great job of removing phishing attacks and malware and so on, but then you implement a direct email route with a business partner that goes over a site-to-site VPN connection between you.

Randy Franklin Smith:
So you know that email communication with this business partner is encrypted, or at least, I should say, it’s not out there on the public internet. Well, the thing that we have to recognize there is maybe there’s no new network connection added. We’re just using the site-to-site connection that we’ve always had with that business partner. But now we’ve got a new workload conversing over that route, and it means that the portion of email to and from that business partner isn’t going through our cloud email security gateway. So there you go, it’s time to look at that. And quite possibly there’s a big security differential between what your requirements are for email security and that business partner’s. Now, when a workload moves, we’re not retiring a workload, we’re not deploying a new one, we’re moving it.

Randy Franklin Smith:
Then everything that we’ve talked about so far applies, but in addition, it’s really important to identify the rules on our existing enforcement points that are now outdated, those rules that permitted traffic to or from the old location of that workload.

Randy Franklin Smith:
If we don’t recognize that those rules are there, are outdated, and need to be deleted, then they’re going to stick around perhaps for years to come. Not perhaps; they will stick around for years to come, because you guys see this all the time, I know from conversations. And so what does that do? Well, first of all, it slows down firewalls, and that slows down traffic. It creates a distraction when somebody is looking at the thousands of rules and trying to understand the current policy. If there’s a bunch of outdated rules there, that just makes that more difficult, and more difficult means risk and mistakes. But here’s the other thing, here’s a direct security risk that arises when a new workload is deployed in that reclaimed address space: suddenly we’ve got rules permitting access for an old technology and an old workload, and maybe they’re completely inappropriate for what is now sitting on that address space.
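A minimal sketch of that cleanup step: sweep the rule base for any rule whose source or destination still overlaps the retired workload's old address space. The rule format and addresses are invented for illustration:

```python
# Hypothetical sketch: after a workload moves, find the rules that still
# reference its old addresses -- candidates for review and removal.
import ipaddress

OLD_WORKLOAD_NET = ipaddress.ip_network("10.5.0.0/24")  # retired location

rules = [
    {"id": 101, "src": "10.1.0.0/16", "dst": "10.5.0.20/32", "port": 443},
    {"id": 102, "src": "10.1.0.0/16", "dst": "10.9.0.20/32", "port": 443},
]

def references_old_space(rule):
    for field in ("src", "dst"):
        if ipaddress.ip_network(rule[field]).overlaps(OLD_WORKLOAD_NET):
            return True
    return False

stale = [r["id"] for r in rules if references_old_space(r)]
print(stale)  # [101]
```

Anything this sweep finds should be reviewed before the old address space is reclaimed; otherwise the stale rules grant the old workload's access to whatever lands there next.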

Randy Franklin Smith:
So at the end of the day, what we really need is a global view of our network policy, regardless of the technology, whether it’s a network security group in Azure, or a Check Point firewall, or something in between. And we need to be able to correlate each rule with the workloads that it facilitates and the intent, the security intent, that it enforces. We need to be able to find outdated rules, and we need to understand the impact of change. And we need to understand what are the potential routes the traffic could take with all of my current rules, because then we can map that to things like vulnerability scans and see that, whoa, this vulnerability is much more important than that vulnerability, because there’s actually a network rule that would allow a bad guy from 80% of our network to target that vulnerability on that particular endpoint.

Randy Franklin Smith:
Well, the good news is there is technology to do that, and that’s with FireMon, and the thing in particular that Josh and Tim are going to show you is their global policy controller, which is really neat stuff. So do I make you or Tim the presenter, Josh?

Tim:
Make Josh the presenter.

Randy Franklin Smith:
Here we go.

Tim:
Just to your last point there too, Randy, I mean, overly permissive rules, overly permissive access, is kind of the bane of network security anyway. Making sure that you understand what the actual requirements are to meet the needs of the business is kind of step one. And if you can do that, great, but the problem is complexity creeps in over time and you do get those overly permissive access rules that creep in, and then you do have the HVAC zone allowing access to the PCI zone, which is a big no-no. Right.

Josh:
So, yeah. So I pulled up just a quick slide, because when it comes to the way we’re able to do a lot of this, to really keep up with the velocity of infrastructure today, building servers, tearing down servers, moving them, I think it’s really important to understand a few things about the way our engine works and the philosophy behind the way in which we view it. So I’m a very technical person. I like to talk really technically about things. It just makes me feel better. From sitting on the other side of the table, usually the more I can hear somebody talk technically about something, the more it just clicks in my brain. So I want to kind of dive in first to the way we see and deal with rules and intent in our global policy controller.

Josh:
And then I’ll start to really tie those to the way you build, move and then retire workloads throughout your enterprise. So kind of going back, when you think of actually building and creating firewall policies, it’s usually forced by a request from the business. They say, hey, we have a new server, we have something out there now that needs to have access to X other service. So usually, like I said earlier, the way it’s been for me is you go to a group of your other engineers and you say, yeah, it goes across these firewalls, or I need to adjust these policies. And they’re like, I think you’re missing something. I’m like, you’re wrong. He’s like, you’re wrong.

Josh:
And then we kind of get into it. And then we end up hugging it out and going out to actually execute. It might be right, it might be wrong. So the idea is that building a new service just takes a lot of time from the actual engineers, and there are a lot of friction points in the organization: when the service is requested, well, what’s the SLA? How long do you expect it to actually get built? So that’s just kind of the way requests go today. So I just want to walk you through the way we’ve broken this out into three engines in our current product from FireMon, and that’s called the global policy controller. So if you look, we have a request that comes in, right? A simple request: 192.168.0.1 going to the Google DNS through the DNS service. So UDP port 53, action allow.

Josh:
So we know what that request is, right? Once we boil it down from whoever’s requesting it, this is what we know it’s going to be. GPC has three different engines, right? The first engine it’s going to hit is the compliance engine. The second engine is going to be the compute engine, which we see down here, and the third one is going to be what we call the enforcement engine, and that’s what does the device automation. So if we look here, the compliance engine has rules in it, and those rules are going to allow a few different things. One is, it’s going to say, is this rule being matched here? Does it match this action? If it does, then we fail the rule, we take it back to the user and we say, hey, you’ve requested a rule, when you’re building this new service, or you’re moving a service someplace, to a subnet, let’s say for instance, that has been disallowed from communicating in this fashion, and we’re going to fail it.

Josh:
We have already set in place rules that don’t allow this specific action. We also have the idea of passing. So if you create a rule, this rule right here, that says any source to the quad eight across DNS, and the action is permit, then we’re going to pass it. So what that does is take that friction out. We have a CPO that uses this term all the time, and I really love it. He’s like, does it take the carbon-based life form out of it?

Josh:
And so what it’s asking is, if this matches, then what we’re going to do is actually go and deploy it ourselves, we’re going to kick it down to the compute engine. Then we don’t have to review it. The next one would be, if they were to have set up a service of any and not DNS, then we need to review this, right? So this is where it gets stopped in the process. And this is where your CAB, your SRB, or whatever kind of process you have for going through everything gets involved, and this is where you go back to your requester, the other engineer, the developer, and you say, hey, what are you doing? Why are you requesting this?

Josh:
And then you kind of dig in with them and get a better understanding of it. So in this example, we hit a pass. Nobody needs to review it, it’s not going to fail, so it gets down to our compute engine. Our compute engine does two things. It does device selection, and then what we consider device abstraction. And this is how we’re able to give you the ability to move workloads without having to readjust any of the policies, and how you can also retire a workload without having to worry that it will leave an attack vector open once it’s gone, if I don’t go back through and change the policy on firewalls X, Y, and Z that’s allowing it through.
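The fail/pass/review flow of the compliance engine can be sketched in a few lines of Python. This is a hypothetical illustration, not FireMon's actual data model or API; the rule fields and the "first matching rule wins, otherwise review" behavior are my assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Request:
    source: str
    dest: str
    service: str   # e.g. "udp/53" for DNS
    action: str    # "allow" or "deny"

@dataclass
class ComplianceRule:
    source: str
    dest: str
    service: str
    verdict: str   # "fail" (reject), "pass" (auto-deploy), or "review"

def matches(pattern: str, value: str) -> bool:
    # "any" acts as a wildcard; otherwise require an exact match
    return pattern == "any" or pattern == value

def evaluate(request: Request, rules: list[ComplianceRule]) -> str:
    # First matching compliance rule wins; anything unmatched goes to human review
    for rule in rules:
        if (matches(rule.source, request.source)
                and matches(rule.dest, request.dest)
                and matches(rule.service, request.service)):
            return rule.verdict
    return "review"

rules = [
    ComplianceRule("any", "any", "tcp/23", "fail"),      # Telnet is never allowed
    ComplianceRule("any", "8.8.8.8", "udp/53", "pass"),  # DNS to quad eight: fast pass
]

print(evaluate(Request("192.168.0.1", "8.8.8.8", "udp/53", "allow"), rules))   # pass
print(evaluate(Request("192.168.0.1", "10.0.0.5", "tcp/23", "allow"), rules))  # fail
```

A "pass" verdict would hand the request straight to the compute engine; "review" is where the CAB or SRB steps in.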

Josh:
So the compute engine does two things. First, device selection: it’ll go out and it’ll know the routing table of all the firewalls you have involved. It looks at the routing tables and says, I know the trajectory of the packet based off of the intent that’s been built up here. So between the source and the destination, I know the trajectory of the packet in my network. Now that I know the trajectory of the packet, I can go through and select the devices that are in line. Once the devices are selected, I will then build out the actual commands or the API calls, or whatever way we’re actually talking to them, per device. So let’s say in this example we just have a Palo Alto, but let’s say for complexity’s sake you have Fortinet, you have Palo Alto, you have Checkpoint, you have ASA, and then Juniper, and then you have some stuff in the cloud.

Josh:
Let’s say you have Azure NSGs that are actually inside some subnets in your cloud subscriptions. We would go through, and we would know the trajectory of that packet from on-prem through all those firewalls, into your Microsoft Azure instance. And we would say, okay, it’s 8.8.8.8, it’s not in Microsoft Azure, but just kind of go with me on this. So we would be able to say, I select all devices in line, and I’m now ready to push and commit to all devices in line the change necessary, the very specific, precise change necessary for this intent to take place. So once that compute has run, it’ll kick it over to the enforcement engine, and the enforcement engine will actually build it out and push it to the devices.
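The device-selection idea, walking the topology from source to destination and collecting every enforcement point on the path, can be sketched like this. The topology, device names, and subnet assignments below are made up for illustration; this is a simplified model, not how FireMon represents routing internally:

```python
import ipaddress

# Adjacency graph of enforcement points (illustrative names)
TOPOLOGY = {
    "dc-edge-palo":  ["core-fortinet"],
    "core-fortinet": ["dc-edge-palo", "azure-nsg"],
    "azure-nsg":     ["core-fortinet"],
}
# Which device fronts which subnet (illustrative)
ATTACHED = {
    "dc-edge-palo": "10.1.0.0/16",
    "azure-nsg": "172.16.0.0/16",
}

def fronting_device(ip: str) -> str:
    # Find the enforcement point whose attached subnet contains this address
    addr = ipaddress.ip_address(ip)
    for device, net in ATTACHED.items():
        if addr in ipaddress.ip_network(net):
            return device
    raise ValueError(f"no device fronts {ip}")

def devices_in_path(src_ip: str, dst_ip: str) -> list[str]:
    # BFS from the device fronting the source to the device fronting the
    # destination; every device on the path needs the rule pushed to it.
    start, goal = fronting_device(src_ip), fronting_device(dst_ip)
    queue, seen = [[start]], {start}
    while queue:
        path = queue.pop(0)
        if path[-1] == goal:
            return path
        for nxt in TOPOLOGY[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(devices_in_path("10.1.2.3", "172.16.5.9"))
# ['dc-edge-palo', 'core-fortinet', 'azure-nsg']
```

Once the in-line devices are known, a per-device translation step (CLI commands for an ASA, API calls for a Palo Alto or an NSG) would render the same intent for each vendor.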

Josh:
That’s just a basic concept of how GPC is able to do that. How, whenever you initially make the request on the build of a server, we’re able to take in the information and create the policy; when you move the workload, whether it’s on-prem to cloud, on-prem to another on-prem subnet, whatever the case may be, how we’re able to abstractly make sure the policy moves with it. And then when you retire it, how we’re able to ensure that once it’s gone, we can pull back all of those policies and take them out.

Josh:
So this is the global policy controller, and what I’m in right now is an anti-virus application that we’ve built, right? So when you’re looking at GPC, what you see is a source, a destination, a service, an application, and an action. As you all know, we have to differentiate between service and application, because if you have a Palo Alto involved, it will know layer seven. It can get that far into layer seven to say, you know what, I actually see GETs and PUTs. So it’s looking that deep into the OSI model, where a service, what we’re saying there, is your layer four, right? So what we’re saying with service is just your layer-four concept of port and protocol.

Josh:
This goes back to what Randy was talking about. Yeah, we can talk a lot about HTTPS, but we want to give you the option of saying not just TCP 443, but actually HTTPS on TCP 443. And if you have the ability to look deeper into it, to actually inspect that this is going to be GETs and PUTs, or that it’s going to be HTTP, then we’re able to see those things, we know what we’re looking at, because your Palo Alto or your Checkpoint or whatever your case is, they’re going to allow you to see that. So what we have is source, destination, service, application, very common. If you are a firewall administrator, if you’ve ever touched those, if you’ve done an ACL in your life, you get this. This is nothing esoteric. We’re not trying to reinvent the wheel.

Josh:
The idea is that when you build a service, or whenever your customers build a service, your systems engineer builds a new server, they want to be able to just say, I know where I’m at, I know where I need to go, and I know how I need to get there. And we want to provide that simple ability to do that.

Josh:
So one other thing, let’s just kind of break open one of these. In our anti-virus application here, we have a rule that’s built to allow a bunch of tagged objects. So this kind of takes it from building a server: once you actually deploy, you want to build rules across it. So somebody will come in and they’ll create a rule and they’ll say, hey, I’ve got a tagged object. What these tagged objects are, is just a bunch of anti-virus servers. This is our way of doing a dynamic list, allowing all the stuff that is built, all of your services or servers, to be dynamically tagged so that they can be part of the rule. This is very similar, if you know Juniper SRX, to address books, or the way Palo Alto does address groups; a lot of people have a similar concept.
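The tagged-object idea is essentially a dynamic address group: rules reference a tag, and membership is resolved at deployment time, so new servers join a rule just by being tagged. A minimal sketch, with hypothetical addresses and a `tag:` naming convention that I'm assuming for illustration:

```python
# Tag registry: server address -> set of tags. Rules reference tags, not
# addresses, so membership changes flow into rules without editing the rules.
servers = {
    "10.0.1.10": {"antivirus"},
    "10.0.1.11": {"antivirus"},
    "10.0.2.20": {"web"},
}

def resolve_tag(tag: str) -> set[str]:
    # Collect every server currently carrying this tag
    return {ip for ip, tags in servers.items() if tag in tags}

rule = {"source": "tag:antivirus", "dest": "10.9.0.0/28", "service": "tcp/443"}

def expand_rule(rule: dict) -> list[dict]:
    # Expand a tag reference into one concrete rule entry per member address
    src = rule["source"]
    if src.startswith("tag:"):
        members = resolve_tag(src.removeprefix("tag:"))
    else:
        members = {src}
    return [{**rule, "source": ip} for ip in sorted(members)]

for concrete in expand_rule(rule):
    print(concrete)
```

Tagging a newly built anti-virus server would make it appear in the expanded rule on the next deployment, and untagging it would drop it, which is the retirement behavior Josh describes later.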

Josh:
And then we’ll do the same thing with the DMZ. So this is our DMZ subnet, you can see it’s a /28, and then we have a service group, and the service group is really just hosting one service, and that’s HTTPS. We have this rule, it’s ready to be pushed, you can see it’s pending enforcement, and we can see that it’s already been pushed out to one of our devices currently. So what we’re looking at here just shows GPC and how it’s built into the whole FireMon concept, because FireMon has a lot of great ability for network monitoring, for security policy monitoring. GPC is a way of actually saying, we’re going to give you that single pane of glass, that ability to see and build and distribute all of your policy through one single global policy.

Josh:
But we want to make sure that what we’re building is actually being fulfilled, is actually being done correctly. This is part of that mechanism for us. And we use Security Manager, one of our flagship tools from FireMon, to do that. Once we push that rule, once we’ve made the commit, we are actually going to pull from Security Manager what has been pushed. So you can see here on this ASA, we’ve pushed this source, this anti-virus server, to this subnet across these services, and we’ve got an action of accept.

Josh:
So the ACL has been pushed to the ASA. We probably have other things in line that we need to push, and we can see we have a Palo Alto sitting out there that we’re waiting to push to; we’re actually waiting to push some rules and some objects out to that Palo Alto. This just shows you our abstraction layer and the way it really works. So this is proof that when you say, I have the intent that this tagged group of servers can talk to this subnet across these services, once it gets to our compute, it says, okay, I’ve pushed to your ASA. Now I’m waiting to push to your Palo Alto, and here is what I’m going to push to your Palo Alto to ensure that your intent has been precisely managed and precisely deployed per the requirements of the intent.

Josh:
This is really exciting stuff, this is the stuff that always gets me really excited, especially the abstraction part. Let’s talk a little bit about the audits, because we’re still kind of talking about when a server gets built up and initially deployed. We want to make sure that, okay, I’ll just talk candidly about it. When developers and DevOps are really getting their hands into things, they have the idea that we can push fast, we can have a velocity that is very attractive to the business, and there’s nothing wrong with that. But from a security perspective, as a security engineer, when people talk high rates of speed on deployments, I get chills and I get knots. I have to say, wait, we have to stop, we have to slow down.

Josh:
Personally, my opinion, and I’ve heard other people express this opinion as well, is that this is really the idea that brought about DevOps: you’re tying us up and you’re not letting us move the business as fast as we want. So we can really get these developers to push things fast, the cloud’s enabled it, and so forth. So we want to be able to get in front of that when we’re looking at deploying and moving servers, and even retiring them. We’ve had a customer, they do taxes for people, a lot of taxes, and they have a small season in their business cycle.

Josh:
And you can guess it’s probably January to April, when they are running servers hot, they’re building like crazy. Their cloud environment is expanding daily, and they just have a demand that increases at a near exponential rate the closer it gets to April 15th. So what they’re coming to us and saying is, we need to be able to flex with that, and we need to flex securely, because we need to be on demand, ready to go, growing the business, but we cannot lag in security. Well, that’s great, GPC can enable that. But when you’re a tax company and you’re working in that industry, you’re accountable to a lot of people for the security of the product and the information of those people. From PCI to NIST 800-53 and other NIST standards, you’re auditing across a large spectrum of frameworks.

Josh:
So the idea is we want to be able to show what the compliance is, how you’re staying compliant with all the different frameworks you need to stay compliant with, in order to securely process and deploy and grow the business and keep everyone’s information secure. This is one of the ways we have to do it. Obviously there’s not been a whole lot of traction with this one, it was deployed quick. And what we see is that when we want to push compliance, we want to be able to come back and say, like, when was this pushed? Right. When I build out labs, I usually don’t put notes, but we want to be able to see the notes of somebody saying, I built this, or this was requested by so-and-so. But ultimately what we know is that the anti-virus application is the one that’s demanding this.

Josh:
So we put the idea of gates around the application, so we know the owners of that application, we know the approver of this application. So we know who to go to in the event that something might’ve been opened up, or in the event that something gets denied.

Josh:
So this is one of the ways we have of enabling the ability to show an auditor how we are putting barriers in front of the deployment process, so that we’re able to keep our standards without somebody rogue going in and putting something in. And in all honesty, it’s not usually rogue cases. It’s usually just that the paper standard of NIST, the paper standard of PCI, is robust. I mean, if you go look at STIG (I’ve worked in the Department of Defense and the Department of Energy, and we were using STIGs in both of those), the STIG standard is large and it’s a lot. And so you want to make sure that you keep those standards, even during your deployment phase of new servers; you want to make sure those standards are held up and you can prove that they’re held up, and that’s what we allow you to do here.

Josh:
So next I’ll show, really quickly, I’ll drop down a little bit more into that compliance, because it’s a part of the two main things that Randy was talking about. It’s a part of the deployment phase, and I would say that’s where most of the heavy lifting of compliance goes, but here’s what is often forgotten, and I briefly mentioned it in the quick use case I gave earlier. Let’s say that you have a workload that has Postgres in it, some database connection to it. And one of the rules you’ve made is that you do not want database connectivity in a specific subnet in your DMZ, no database connectivity allowed in there. Well, in the current environment, let’s say you’re running, as was mentioned earlier, an NSX environment, or you’re quickly adjusting and throwing workloads across your enterprise, there’s a potential that you could throw something that is open to that database into the wrong subnet.

Josh:
So what we allow you to do in GPC, and kind of what I’ve explained so far, is that as the workload moves, when it moves from subnet one to subnet two, or from zone one to zone two, it’s going to adjust the policy from zone one to zone two to appropriately flex to the requirement of that workload. So you take something that’s demanding SSH, and it goes from zone one to zone two. And it’s going to say, well, I’m going to tear down that SSH policy in zone one, and I’m going to build the new SSH policy in zone two for that new host.

Josh:
Well, let’s say you get to zone two, and in this case we’re talking about database traffic and that specific subnet in the DMZ, you get to zone two and it’s not allowed. So you want to make sure that in the event that you moved your workload, and business is growing, we’ve got to get these workloads built, more taxes are coming in, more tax forms, whatever it is, when you move it, it is not going to allow that port and protocol, that application, to be opened up in that specific subnet or zone where you’ve already defined it not to be. We do that through this compliance engine. And so the very simple case here is Telnet, right? We just don’t want to allow Telnet. And what I’m showing here is that from any source to any destination, if somebody is saying accept Telnet, we fail it and we send back a message to the requester saying, you’re not allowed to do this.

Josh:
And if the workload were to move, we would kick back a message saying, hey, you’ve moved a workload and we’re not going to allow Telnet, especially if we have a more defined source and destination. We do this a couple of different ways, because we know there’s some nuance to applications, and all applications aren’t the same. So when you’re moving workloads and you’re building up different ones, we know that you’re not going to have the same requirement from every different workload. And if you look, we have many different workloads here: a file server, Demisto for threat hunting, our network utilities. So when we look at it, what we want to do is say, we provide these global rules.

Josh:
These could be your PCI, your NIST standards. If you’re running anything in the network or in the enterprise where you’re just saying, I am not going to allow these specific things to ever be allowed, no matter what the application, Telnet will never be allowed, that is what you would put in the global rule. But the local rule is where you get the ability to create the nuance. So as you’re building up these new servers, or as you’re deploying them to other sections across your enterprise, these local policies will always follow them. For instance, in this anti-virus application we have, we’re showing the rule that I showed you earlier: if any of these anti-virus servers request this, fast-pass it, just let it go through. Nobody needs to click a button and say it’s approved.

Josh:
We’ve already accepted it as an organization, we’ve already gotten together, the SRB has agreed on it. We are going to allow you to have this approved and deployed without the touch of a button; all you have to do is request it. Now, we can actually go and look across multiple different scenarios, and most of them are going to be fast-passing different application requests. And this happens in an order like you would see in a Palo Alto, for instance: you hit our global pre, you hit our local, and you hit our global post. But the idea is that if you get to our global post and we see HTTP being requested, we’re going to say, you need to review this.
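That three-tier evaluation order (global pre, then local, then global post, first match wins) can be sketched as follows. The rule shapes and verdicts here are illustrative assumptions, simplified to match only on service:

```python
# Evaluate compliance rules in three tiers, in the global-pre -> local ->
# global-post order Josh describes; the first matching rule decides.
def first_match(request: dict, tier: list[dict]):
    for rule in tier:
        if rule["service"] in (request["service"], "any"):
            return rule["verdict"]
    return None  # nothing in this tier matched

def evaluate(request, global_pre, local, global_post):
    for tier in (global_pre, local, global_post):
        verdict = first_match(request, tier)
        if verdict:
            return verdict
    return "review"  # unmatched requests go to human review

global_pre  = [{"service": "tcp/23", "verdict": "fail"}]    # Telnet: never allowed
local       = [{"service": "tcp/443", "verdict": "pass"}]   # app-specific fast pass
global_post = [{"service": "tcp/80", "verdict": "review"}]  # cleartext HTTP: review

print(evaluate({"service": "tcp/443"}, global_pre, local, global_post))  # pass
print(evaluate({"service": "tcp/80"},  global_pre, local, global_post))  # review
```

The global tiers carry the organization-wide standards (PCI, NIST), while the local tier carries the per-application nuance, exactly the split Josh draws between global and local rules.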

Josh:
And this is something I talked about earlier. This can happen on the move. So as things move and compliance reruns, it might get to a part of the network, or into a zone, where it says, well, you’re not going to be denied HTTP, but we’re going to have to review it. That is because we’re not really sure. Google says it’s bad, everybody says it’s bad. We’re not going to just let you go and build HTTP, any kind of cleartext stuff, out to the world. So we just need to review why your application might need it, because we know that there are still going to be things that need to do this. And another idea, and this is something that Randy spoke to earlier, is the idea of whitelisting and being very precise in what we allow through.

Josh:
And so that means that we’re going to let the deny-all, the implicit deny at the bottom of the rules, do most of the work. So let’s just whitelist and say what we’re actually going to allow. So that means that in GPC, when you’re deploying, if you’re ever deploying a deny, you could potentially disrupt services, because we’ve only poked precise holes through for precisely what has been needed for a specific intent brought on by a business requester. So what we’re saying is, if you’re ever going to deny anything, we need to have the conversation of why you’re denying it, because it goes against the philosophy of whitelisting and precisely saying, this is what we’ll allow.

Josh:
Once you get everything built up, we get a lot of conversations about this map. So let me take a step back. This is our profile explorer, and this will explore the application. This will show you everything that the application is. So all those enforcement points and connection points that Randy talked about earlier, this is going to show you what they are, where they’re at (where they’re at is a relative term), what they’re talking to, and the current state of them. As you can see, these are all pending enforcement, because I’m just waiting to push them across my Palo Alto. I think they’re all going to run across it, and I haven’t done that yet. And here’s a deny: anything going to this quarantine, we’re going to deny. We’re going to quarantine things.

Josh:
We’re going to tag them with the quarantine, and we’re going to deny that things can get to them. That’ll get sent out enterprise wide. I can talk to that a little bit more in a minute. And so we can see all tagged antivirus server objects are going to be allowed to talk to this ASA DMZ in this internal tag here. We can drill down a little bit, see what the internal tag is, what all’s a part of it. We can see a lot of these /24 networks are a part of this. We can also go a little bit more and look at the access rules that are being used here. This specific access rule is this antivirus management access rule that allows these things. We can go and click on it, and we can dive back in to what’s going on.

Josh:
It’s not been pushed yet, we’ve seen that; we can go and look at device changes, and there it is. Yep, we’ve got to push this to this Palo Alto. So that gives us the ability to get a good idea of the connection points and enforcement points that an application needs, or that are dependent upon that application. Obviously this will grow and shrink based on your organizational needs, your business needs, maybe just some technology needs, growth in the product, or whatever it is. But what’s important to note here, and I get this question a lot, a lot of customers ask, where are the firewalls? And I really want to drive this point home, because I think a lot of what I’ve said really comes down to this.

Josh:
We’re showing you the intent and the abstracted layer around your application. So yes, there are many firewalls underneath all of this. There could potentially be 50 firewalls underneath all of this, but what we’re trying to show you is how your applications actually talk and whether they’re going to be allowed or not. We’re trying to relieve you of the complexity of this Palo Alto, this SRX, this ASA, this Sidewinder, the Azure NSG, the AWS security group, all of those involved. We’re trying to put the abstraction layer on top so that you’re able to have a simple deployment of your policies.

Josh:
So you can forget what firewalls this runs over. I say that in a very loose sense because, I mean, I get it, as a security engineer, as a network engineer, you don’t really want to forget what they are, but there’s an ease that comes with saying GPC has done this. Because here’s the truth of the matter: if this DMZ changes, if we add tagged objects that are outside of all of these subnets, then other firewalls will be brought into the mix. If this anti-virus server changes and moves to a different spot on the network, well, this doesn’t change. This map here will stay the same. So you can move your workloads all day long and you’ll still maintain the connectivity that we’ve defined and designed here, or I say we, but that you’ve designed; that connectivity stays the same.

Josh:
So I just want to go to, I think, one last part here, one last idea, one more thought, and that’s the idea of actually retiring services, right? So we’ve talked a whole lot about how you take a deployment, say a server, or a request for new access to something, and you deploy it, you get it out there; GPC is able to alleviate a lot of touch points, a lot of friction in the organization. And then how, if you are on-demand moving these things back and forth across your hybrid infrastructure, or maybe just DC to DC, whatever the case is, GPC is able to flex your environment to make sure those moves are happening. But let’s say, in the case I brought up earlier, let’s say that you’ve deleted workloads, right?

Josh:
You’ve deleted a server or a group of servers, or some pod of containers that you have. Once you’ve deleted that, if you haven’t gone back through and deleted the access rules to it, I say your access rules, let’s say your device rules, right, the rules that are on your firewalls. If you haven’t gone back and deleted those, then that vector of attack is open. You still have a vector of attack: something’s open to get in on port 80. Let’s say that you replace it, because the way a lot of cloud deployments are going is, I might have a virtual interface and it’s 111, and I am destroying and building a bunch of instances and just reattaching them to this virtual interface. And you might have different services on these different instances.

Josh:
What we want to be able to do is say that whenever the service goes away, the specific service goes away, the instance is gone, there is no longer a need for DNS to go to that virtual interface, we need to make sure that that access rule is actually taken out of the firewall. That way we can ensure that we’re not leaving port 53 open for no reason, or SSH open on an instance in the cloud environment that has no need for SSH. So we have to be able to retire rules as the workload retires as well. GPC, being the place that takes in all the rules and actually holds the access rules in place, as soon as that request or that intent is taken out, or that object is taken out of that access rule, we will be able to tell you that rule will be taken out of the device. So for instance, if I were to come in and go to my tags referenced in my anti-virus here.

Josh:
If I were to take out any of these, whoops, clicked on the wrong thing. If I were to take out any of these servers here, or this subnet here, as soon as I took that out, it would no longer have an access rule in the devices for it to be accessed.
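The retirement mechanics reduce to a set difference: derive the device rules from the current intent, derive them again after the object is removed, and the difference is exactly what must be torn down on every device in the path. A minimal sketch with hypothetical addresses:

```python
# Derive concrete device rules from an intent; the set difference after an
# object is retired is exactly what must be removed from enforcement points.
def derive_device_rules(intent_sources: set[str], dest: str, service: str) -> set[tuple]:
    return {(src, dest, service) for src in intent_sources}

# Intent while both anti-virus servers exist
before = derive_device_rules({"10.0.1.10", "10.0.1.11"}, "10.9.0.1", "tcp/443")

# One server is retired and removed from the tagged group
after = derive_device_rules({"10.0.1.11"}, "10.9.0.1", "tcp/443")

stale = before - after  # rules to tear down on every device in the path
print(stale)  # {('10.0.1.10', '10.9.0.1', 'tcp/443')}
```

Because the device rules are always recomputed from the intent, there is no separate cleanup step to forget: deleting the object is what deletes the rules.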

Josh:
So just to recap, we really wanted to show the idea of how GPC is able to take in a request and deploy the rules to the many different devices you have that hold the access rules. How, if your workload moves from zone one to zone two to zone three, and back to zone one, we can follow that workload through the different devices, no matter what they are, because we can abstract and natively talk to your devices; we have an abstraction map of your infrastructure. And that we are able to retire the appropriate device rules, we’re able to take them out of the device, lessening your attack vector surface, your attack surface. I get my words mixed up. We’re able to lessen that as you retire your workloads. So Randy, or Tim, I don’t know if you all have anything else to jump in on. I mean, I can talk about this 100 more minutes.

Tim:
I’ve got to say, Josh, this really, I think it’s important for everyone to understand, as they’re looking at this, heads are probably spinning right now going, this really looks cool. But it’s a collaborative conversation. We’re creating a collaborative platform for security orchestration here that everybody can be a participant in: your stakeholders, your business owners, your compliance teams, IT security, cloud security, infrastructure teams. With the security team setting the guide rails, so that if somebody does try to color outside the lines, as Josh pointed to earlier, with the any statement or something like that, then we can arbitrate and we can go back in and we can see. And then also getting rid of that unnecessary complexity, God, that is such a problem over time, we see that.

Tim:
On average, when FireMon engages with a new client to help them get their hands around change and risk and compliance and cleaning up their environment, we find firewalls with 50%, sometimes 60%, of the rules not being used. So being able to automate the hygiene of cleaning up that unnecessary complexity that creeps into the policies over time, that alone is such a huge value proposition and a benefit, because these rules go in, but rules never go out.

Tim:
And so being able to automate that cleanup process is just gold as well. But this is definitely a paradigm shift from the way that we think about how we’re doing policy management today. And the only thing that I would say as a departing remark, something I would leave the audience with, is we have to start thinking about how we’re going to gain parity with the speed of the business. And when I say we, I mean we as the security team. If we’re going to gain parity with the speed of the business, the traditional methods and processes that we’re using today are failing. They’re falling down. They’re not keeping up with the speed of the business. And so we have to look at how we’re going to gain parity with the speed of the business.

Randy Franklin Smith:
That’s awesome. We’ve got some questions for you. And folks, any questions you’ve got for FireMon, please put them in the question window. Jonathan would like to know: how is GPC licensed?

Josh:
Yeah, so GPC is licensed under an automation license scheme for FireMon, where we come to a customer and say, hey, how far are you ready to take your automation? So if you want to take it all the way to the full-blown board that I just showed you, we license based off protected objects. However many objects you currently have, or believe you have, we build the license based off of that. Just to better explain the protected-object license: it covers every object that you use GPC to protect. That’s what we’re licensing off of.

Randy Franklin Smith:
So it’s like how big your network is?

Josh:
Yeah, exactly right. It’s exactly that. How many objects are you going to have GPC actually protecting? How big is that? We’ll license off of that.

Randy Franklin Smith:
All right.

Tim:
And not to get too far into the weeds, but I’ll just put it out there that the platform it’s built on is incredibly scalable. It horizontally scales. I know a lot of vendors say that, but that’s one of the things we always challenge our customers on: we’re more than happy to prove how we work at scale. We deal with a lot of large enterprise organizations that have thousands upon thousands of access rules that they’re trying to get their arms around. But it’s important that any solution you select, or the company behind it, has a strong commitment to an open API, and it also has to be scalable to the size of the business.

Randy Franklin Smith:
Well, and that’s a question from Susan: what are the form factors of the deployment model? Is it on-prem software, is it an appliance, is it cloud-based?

Tim:
It can kind of be all of those. I should have had a slide prepared that shows how all the cogs go together. But by and large, there is a platform: there’s an application server, there are data collectors, there’s a database, as Josh related to earlier. All of that can be virtualized, or it can be deployed on a purpose-built platform. But all the different components virtualize quite well. It’s not a service. Well, I don’t want to say it’s not a service, because we do have a lot of MSP, XSP-type clients, very large service provider clients, that are using some of our technology behind the scenes to provide value-added services in their portfolio.

Tim:
You wouldn’t even know it’s FireMon if you didn’t know it was FireMon, that type of thing. But yeah, it’s very flexible from a deployment standpoint; it can be a unified deployment, or it can be a very distributed deployment as well.

Josh:
We have customers today running it in AWS, running it in Azure and running it on-

Randy Franklin Smith:
Okay, somebody is referring back to what I mentioned about vulnerability scans. Do you have any integration for consuming vulnerability scan data and being able to prioritize the things that need to be fixed?

Tim:
Yeah, we actually can take in vulnerability scan information from a number of different sources. We support Tenable, Qualys, and Rapid7. And what’s interesting is that we bring in that vulnerability scan data and then correlate it to the policies that are enforced.

Tim:
And so we kind of overlay that vulnerability scan data. We have a number of different vectors that we bring into play to do that correlation and analysis, right? We understand the compensating controls. We understand the route intelligence. We understand the vulnerability scan data that you’re giving us; you’re telling us what vulnerabilities currently exist on your network. So we can correlate the access rules to say, hey, do I have any paths, any access, to these known vulnerabilities? We can even take it a step further: we have the ability to simulate a bad actor coming in through a well-known threat entry point, and then we can see how far that bad actor could potentially go within the network and what vulnerabilities they could gain access to.

Tim:
And moreover, when they hit that, if it’s a root exploit, could they pivot off of that root exploit and go somewhere else in the network? Being able to expose and visualize that gives you the actionable intelligence you need to remediate it.
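The attack-path idea Tim describes, starting at a known entry point, following the access the policy permits, and pivoting onward only where a root-level exploit compromises a host, is essentially a graph walk. Here is a hedged, generic sketch of that reachability analysis; the data shapes are assumed for illustration and this is not FireMon code.

```python
from collections import deque

def reachable_vulns(access, vulns, entry):
    """access: host -> set of hosts the policy lets it reach.
    vulns: host -> exploit severity ('root' lets the attacker pivot).
    Returns the vulnerable hosts a simulated attacker can expose."""
    exposed, seen, frontier = {}, {entry}, deque([entry])
    while frontier:
        host = frontier.popleft()
        for nxt in access.get(host, ()):
            if nxt in vulns:
                exposed[nxt] = vulns[nxt]
                # A root-level exploit lets the simulated attacker pivot
                # from the compromised host and keep walking the graph.
                if vulns[nxt] == "root" and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return exposed

# Example topology: the entry point can reach A and C. A has a root
# exploit, so the attacker pivots through it to reach B. C has no
# exploit, so D behind it stays out of reach even though C can see it.
access = {"entry": {"A", "C"}, "A": {"B"}, "C": {"D"}}
vulns = {"A": "root", "B": "user", "D": "user"}
```

The value of the visualization Tim mentions is exactly this distinction: D is vulnerable but not exposed, so remediation effort goes to A and B first.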

Randy Franklin Smith:
Okay. So Ignacio is wondering whether… He says testing firewall rules and proving that they actually block what they’re supposed to is really difficult. Since you have all the rules in there, can you simulate or verify that? If you tried sending this kind of traffic, would it in fact get blocked, or would it be permitted?

Tim:
Yes, we definitely have that, and we’ve had that for quite a while. Just for the benefit of the audience here, FireMon has been around for a long time.

Tim:
We’ve been around the better part of 14 years. So there’s a lot of deep domain expertise in the security management firewall space, or any enforcement space, I should say, because it’s so much more than just firewalls; it’s anything with a security policy that contains ACLs.

Tim:
But yes, you can absolutely simulate access by port, protocol, or service, however you want to do it, just to see if that traffic can get through. We have access path analysis, we have device path analysis, and we have traffic-based analysis for identifying: if I’m at point A and I want to get to point B, how do I get there? What’s available or not? So you can test the functionality and the behavior of your policy 100 different ways with our tool.
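The kind of rule-behavior testing Ignacio asks about boils down to evaluating a candidate packet against an ordered, first-match ACL to predict whether it would be permitted or denied. This sketch shows the generic first-match semantics most enforcement points share; the rule format is assumed for illustration and is not any vendor’s implementation.

```python
import ipaddress

def evaluate(acl, src, dst, port):
    """Evaluate a candidate packet against an ordered, first-match ACL.
    acl: list of (action, src_net, dst_net, allowed_ports) tuples."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for action, src_net, dst_net, ports in acl:
        if (src in ipaddress.ip_network(src_net)
                and dst in ipaddress.ip_network(dst_net)
                and port in ports):
            return action  # first matching rule wins
    return "deny"  # implicit deny at the end of the policy

# A narrow permit for HTTPS from one subnet, shadowing a broader deny.
acl = [
    ("permit", "10.0.0.0/24", "10.1.0.0/24", {443}),
    ("deny",   "10.0.0.0/16", "10.1.0.0/16", {80, 443}),
]
```

Because evaluation stops at the first match, rule order matters: swapping the two rules above would block the HTTPS traffic the first rule was meant to allow, which is exactly the class of mistake this sort of simulation catches before deployment.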

Randy Franklin Smith:
Cool. Well, I think that brings us to the end of our questions. This has been really cool; I hope you guys have enjoyed it as much as I did. Thank you for spending time with us, and we hope it was valuable to you. And thanks a lot for showing us your technology. What are the best ways to engage? This isn’t something you just download and try out, right? What’s the next step for somebody who sees value in this?

Tim:
That’s a good final follow-up there, Randy, absolutely. You can go to our website at FireMon.com and request a demonstration where you can actually put your hands on our solution. We give you an unlimited license for approximately a month, and we’ll engage with you; we’re not just going to throw you feet first into the fire. We have technical resources available to help minimize the learning curve so that you can productively evaluate how this technology works in your environment. So we’d love to talk to you. All the different ways to contact us are there on the website. We’d be happy to engage with you to look at your specific requirements.

Randy Franklin Smith:
Well, awesome. Thanks a lot, guys. Thanks for showing us; it was a good demo, Josh. Take care, folks. We’ll be in touch again soon. Bye bye for now.

Josh:
All right. Thank you all.

Tim:
Thanks everyone. Appreciate your time.
