Are Firewalls Dead? Not by a Long Shot - But We Need to Make Some Changes

On-Demand

Video Transcription

Automated:
The broadcast is now starting. All attendees are in listen only mode.

Randy Franklin:
Good day, everybody, Randy Franklin Smith here. Today we’re talking about firewalls and the fact that all of the prophecies about the firewall being dead just don’t have a leg to stand on. And it’s not because any of us here have some love or attachment to firewalls; it’s just a fact that they provide a very important function and hopefully fulfill a need that isn’t going away anytime soon. And that is to protect resources from malicious traffic. It’s an important layer of defense. But you notice that I said firewalls can, or hopefully do, provide this protection. And that is one of the main things that we’re going to talk about today.

Randy Franklin:
The fact is that that value is not always harvested, not always realized from firewalls today, and in most cases the full value is far from realized, and we’ll talk about why. But before we go further, I want to thank FireMon for making today’s Real Training for Free possible. And, Tim, thank you. I look forward to hearing highlights from the recent report that you guys did on firewalls in the enterprise and the research there. And as always, your really awesome technology for managing firewalls.

Tim:
Thank you, Randy.

Randy Franklin:
Okay, so here’s our agenda. We’re going to talk about the fact that we need more segmentation today than we ever have. We also need deep analysis of traffic today more than we ever have, and we’re going to explain why on both of those in some detail. Then I want to spend some time talking about the fact that cloud resources tend to be overexposed to the internet right now. And all three of those things basically build the case that firewalls are still extremely critical to cybersecurity today, and probably mean that we need more firewalls, not fewer.

Randy Franklin:
But the problem we need to address upstream from all that is the fact that we aren’t getting the value from our firewalls right now. So before we throw more firewalls at our risks, we need to get our house in order. Okay, so I think I’ve made these points, or at least mentioned them, and that’s really all this slide is doing: we need more segmentation. And really, Tim, I’m using the word new here rather than more. So I’m wordsmithing this, but we need new rules on the firewalls we do have just to get more segmentation. I’m not saying more rules, because at the end of the day, there’s a ton of rules on most firewalls that should be deleted, right? But there’s a whole bunch of other rules that need to be added to firewalls wherever we already have them deployed to get us more segmentation. Thoughts on that?

Tim:
Yeah. I mean, we see a lot of overly permissive rules in firewalls today, which really means that you need to break that overly permissive rule out and it needs to become more discrete, targeting only those things that need access. Otherwise, we need to make sure that we’re only allowing what is necessary to meet the needs of the business. And you’re right, today we find way too many unused rules, redundant rules, shadowed rules, but the biggest offender is the overly permissive rules that need to be locked down. And if we’re going to micro-segment at a smaller level, if we’re going to create tighter zones of control, then of course we need to get very specific about the rules that we create in order to enable that micro-segmentation.

Randy Franklin:
Yeah. And by the way, folks, please use the question window. We want your feedback, and I’ll work as much of that in as we can. Now, here’s another thing that, well, we’ll talk about as we go along. Let’s just move on. So I think, to further this discussion, we need to acknowledge that there are two types of firewalls, and I also want to level set what we mean when we say a firewall.

Randy Franklin:
So let’s start with the second bullet point there. You have the full-stack, next-gen, deep-packet-inspection firewall products, the ones that are looking at traffic in both directions. So we’re looking at outbound traffic to detect data leakage or violation of acceptable use, stuff like that. We’re also, of course, looking at inbound traffic to either detect attacks or detect malicious content. And along with that, these products provide us decryption.

Randy Franklin:
So we can look inside that SSL traffic: content scanning, maybe even sandbox detonation, the application of threat intel lists, so that, oh, wait, why are we outbounding on port whatever to this IP address, which is a known bad IP address? And then doing things like pattern analysis: look at all this DNS traffic, where the ratio of data out is way higher than the amount of data in. That would never happen with normal DNS traffic. That’s indicative of somebody trying to exfiltrate data over DNS.
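
A rough sketch of that ratio heuristic, with an invented flow-record format and an arbitrary 5x threshold (this is not any vendor’s actual detection logic):

```python
from collections import defaultdict

def flag_dns_exfiltration(flows, ratio_threshold=5.0, min_bytes_out=10_000):
    """Flag clients whose DNS traffic sends far more data than it receives.

    `flows` is an iterable of (client_ip, bytes_out, bytes_in) tuples for
    port-53 traffic. Normal DNS is mostly small queries with larger answers,
    so a heavily outbound-skewed ratio is suspicious.
    """
    totals = defaultdict(lambda: [0, 0])
    for client_ip, bytes_out, bytes_in in flows:
        totals[client_ip][0] += bytes_out
        totals[client_ip][1] += bytes_in

    suspects = []
    for ip, (out_b, in_b) in totals.items():
        if out_b >= min_bytes_out and out_b > ratio_threshold * max(in_b, 1):
            suspects.append(ip)
    return suspects

# A host tunneling data out over DNS stands out against normal lookups:
flows = [
    ("10.0.0.5", 80, 400),        # normal lookup: small query, bigger answer
    ("10.0.0.5", 90, 500),
    ("10.0.0.9", 60_000, 1_200),  # outbound-heavy: likely exfiltration
]
print(flag_dns_exfiltration(flows))  # -> ['10.0.0.9']
```

In practice this analysis runs over aggregated flow records, but the idea is the same: the out/in byte ratio of legitimate DNS almost never inverts this sharply.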

Randy Franklin:
So those are your, I don’t know if you have a term for these, Tim? I call them the full-stack firewall products. That’s as opposed to anything out there that can be used as a traffic policy enforcement point. That could be one of these full-stack firewalls, but it’s more likely a router or an intelligent, managed switch. But it could also be these objects in the Cloud, like a network security group. And there’s all kinds of things in between; it could even be a wireless access point. Tim, you always call it anything with an ACL, in terms of network access control list rules. Do you want to talk about these two different categories, and do you see it the same way, Tim? What kind of terminology do you guys have at FireMon for this?

Tim:
Well, I like the term you used there, the full-stack firewall, definitely. But we’re really talking about next-gen technology, or the next-gen firewall, which is really now-gen. We’ve been talking about next-gen for several years, tracking next-gen adoption and everything, but we’re still calling it next-gen. What we’re really talking about there is adding additional types of intelligence in the firewall, right? We’re putting additional technology in the firewall to block more traffic.

Tim:
So instead of just looking at source, destination and service, we’re actually looking at source, destination, service, content ID, app ID, user ID, things of that nature. So additional tuples: instead of a three-tuple, we’re managing a six-tuple of information. But you said something earlier, Randy, that I think is important to hit on, and that was that we need to get better at managing the firewalls that we have. Yes, we can add more rules, but until we start applying good firewall hygiene techniques and good firewall management, we’re not going to realize the return out of the security investments that we’re making.
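
Tim’s three-tuple versus six-tuple point can be illustrated with a toy first-match rule evaluator (zone names, app IDs, and rules are invented for illustration):

```python
# Classic rule: source, destination, service. Next-gen policies add app,
# user, and content identifiers, so one port can carry many
# differently-treated applications.
RULES = [
    ({"src": "users", "dst": "internet", "service": 443,
      "app": "office365", "user": "employees"}, "allow"),
    ({"src": "users", "dst": "internet", "service": 443,
      "app": "file-sharing", "user": "employees"}, "deny"),
]

def evaluate(packet):
    """Return the action of the first rule whose every field matches."""
    for match, action in RULES:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "deny"  # default deny

# Both flows are TCP/443; a port-only (three-tuple) firewall cannot tell
# them apart, but the extra app-id field can:
print(evaluate({"src": "users", "dst": "internet", "service": 443,
                "app": "office365", "user": "employees"}))     # allow
print(evaluate({"src": "users", "dst": "internet", "service": 443,
                "app": "file-sharing", "user": "employees"}))  # deny
```

The extra fields are exactly why policy complexity explodes: every added dimension multiplies the combinations a rule author has to reason about.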

Tim:
You can have the best technology on the planet, you can have a next-gen, next-gen, next-gen firewall with as much technology in it as you want. But if you’re not managing it appropriately, if you’re not enabling your people to leverage that technology appropriately, and they’re not well-trained on it and they don’t have the right tools, then you’re not going to realize the benefit of that additional technology. And this came up at the top too: the aggregation of point solutions, trying to get more out of individualized point solutions or expanding the functionality of the point solution, is driving the enhancement of firewalls as well.

Randy Franklin:
Yeah, and we’re going to talk about those issues a lot more. But I think the important thing right here is just, perhaps for some of us, to expand our concept of what a firewall is and think about both of these types. And when it comes to segmentation, there are many different things that you can use as a firewall. So enough said on that for the time being. But now we’re going to break down and just talk about segmentation. This is nothing more than the capability of any dumb device or object in the Cloud that allows you to filter by source IP, destination IP, and source and destination port numbers.

Randy Franklin:
So that is very old technology, but there is still just an incredible need for using that kind of filtering more, getting more granular and more omnipresent throughout our network with rules like that. The reason is that right now, in most environments, if you compromise one endpoint out there, you now have network access to just about every other system on the global network. And when I say network access, I mean the ability to originate a packet on that compromised endpoint and get it delivered to the IP address of some other system or device out there on your network. And that right there is a risk and an exposure that needs to be understood, and it’s overlooked so many times.

Randy Franklin:
This is going back a long time, but I still see the same sort of misapprehension going on. I was doing an audit of a large insurance company one time, and they have a network of agents. And then of course, they have all of their internal servers and everything else on their network. The problem is, what they had done is they had put the systems and all the endpoints that were owned, managed by, and co-located with these tens of thousands of agents into one Active Directory domain, and then they put all their internal servers in another Active Directory domain, and they thought they were secure.

Randy Franklin:
But the problem is, Tim, they were still on one big happy IP network. And there was nothing there to prevent some agent’s endpoint from sending a packet to any server at all on their internal network, stuff they should never be accessing. I mean, the agents were only accessing two or three different line-of-business applications, and think about everything else that a global Fortune 500 company has, yet they could send IP packets to those systems. To me, the risk was very evident, but everyone else could go on like, well, it’s in a separate domain. So that’s just one example, and it’s an old one, but I still see the same kind of thing going on today. Any quick thoughts from you on that, Tim?

Tim:
No, no, you’re absolutely right. I mean, it’s important, and we’ll talk about this later too. It gets back to visibility, as far as where we need to enforce and where things are getting through. But today, you’re right. The audience here will no doubt at some point hear the terms north-south and east-west, and that’s really what we’re talking about. We’re talking about limiting that range of motion inside the network as well, not just outside, not just from the inside out. The perimeter has grayed, the perimeter lines have blurred. We all know that. But still, we need to understand what our zones of control should look like inside the organization, not just outside.

Randy Franklin:
Yes. So two key terms that Tim used. Folks, I want to draw your attention to zones of control and range of motion. To me, and maybe it’s not a very likable or pleasant analogy, but let’s just put it this way: who says that a corporate network, Tim, should be modeled after a free and open society? Yet most of them are. To me, your internal network should resemble something more like a police state, where to go very far at all, you’ve got to go through checkpoints and you’ve got to have travel permits. And that’s the right way to do a network.

Randy Franklin:
You can start with things like east-west and north-south. But we can keep going deeper and deeper; the ultimate is micro-segmentation. Most of us are not anywhere close to micro-segmentation, we just need more segmentation. Okay, so let’s give some more recent examples of why this is important. Think back to the famous, well-known hacks over the past several years. One of the most recent is the Capital One hack involving AWS; before that, the next one that comes to mind is Equifax; before that, Target. These are just off the top of my head, but all three of those would have benefited from more segmentation and traffic policy enforcement.

Randy Franklin:
So every single one of those. With Target, it was an HVAC vendor being able to manage the HVAC systems in all of these stores, but they could access more than the HVAC systems, more than the systems they needed to, and the bad guys were able to jump from that entry point, from managing HVAC systems, to the point-of-sale systems actually running the cards at the cash registers. So there’s a perfect example of how segmentation would have helped. Equifax: the bad guys broke into just one single unpatched Apache server, but then they were able to access downstream database servers. And again, there was supposed to be segmentation in between there, but for various other reasons, that had lapsed.

Randy Franklin:
Capital One, again, looks like an example where S3 storage was accessible from the internet, and it’s not like this was storage serving front-end customers on the internet. It was internal database storage, not of PCI data, but of personally identifiable financial information.

Randy Franklin:
So, again, segmentation: we just shouldn’t be able to send a packet from any point to any other point, because once we do that, we are relying entirely on the authentication and authorization of each individual resource. I’m going to talk about that more. Tim?

Tim:
Yeah, I mean, the Target example you give is a perfect scenario. While there was a firewall in place, and we don’t know this for sure, I’m suspecting that the firewall that was in place allowed unnecessary access. There’s no reason, right? There’s absolutely no reason that there should be access from your HVAC network or your temperature-monitoring network to the PCI network, to the point-of-sale network. But yet, you’re right, that’s the path that the nefarious individual took in order to plant their malware and then grab all the credit card information that was out there.

Tim:
Same thing with Equifax. And I think Equifax was also found to have technology that wasn’t supported by the vendor, meaning that they weren’t able to bring it up to the right patch levels and things like that. So there’s always a problem there. You always put yourself at risk if you have equipment on your network that the vendor that sold it to you can’t support anymore, because it’s gone past its life expectancy. And yeah, in the Capital One case, where they used a server-side request forgery, there was a web service gateway there with a misconfigured firewall, and I’m going to talk about that a little bit also when we get to it. But all of these are examples of this: if you don’t have a way to look at how your policy behaves, and you don’t have something that can perform behavioral analysis on that firewall, these rules have become so complex and the firewall policies themselves have grown so large that it’s just not humanly possible to evaluate policy behavior without some form of automation.

Randy Franklin:
Yeah. So, John says, sorry to be daft, but by segmentation, are you talking about total isolation or protocol isolation? He says, in our environment, you can’t RDP into our domain controllers, but you expect DNS and DHCP to all work. So John, segmentation is a spectrum, a continuum. At the permissive end, every IP address can communicate with every other IP address from and to any port. At the strict end is micro-segmentation, zero trust, where for every single device, we know what other devices, what other IPs, it needs to talk to, and what ports on those systems it needs to talk to. That’s the far extreme.

Randy Franklin:
No one is all the way there, 100% micro-segmented. They may be between their VMs, or something like that, for a limited set of systems. So we’re all somewhere in between those two extremes. And what we want to do is follow least privilege. And it’s not least privilege with regard to what human users can do, or what programs can do; it’s least privilege from the point of view of devices on a network: which systems can they communicate with, and what types of traffic, at the protocol level, can they use to communicate with those systems?

Randy Franklin:
Okay. So that’s the current problem: we’re just way, way too far on the permissive end of the segmentation continuum. Here are areas where we need more segmentation. First, between application tiers. Let’s say we have a web front-end server which talks to a database server; that’s about the simplest multi-tier application profile you can have. So the web application server is exposed to the internet on port 443. Maybe we have a next-gen firewall in between, which is tearing down the HTTP requests, reverse proxying them, and looking for cross-domain attacks and SQL injection and all that; maybe we don’t.

Randy Franklin:
But anyway, regardless, the web application server has to be exposed to some degree to the internet in order to receive requests. Then the web application server needs to talk to the database server; if it’s SQL Server, it needs to be able to talk to it, probably on port 1433. But let’s put it this way: should the web application server be able to RDP or Secure Shell into the database server? And when I say into, why don’t I change that preposition to at? Can it RDP at that server? Should it be able to Secure Shell at that server? Should it be able to WMI at that server? What do I mean? What’s the difference between at and into? I’m making no statement at all as to whether there’s any account that has access to log in via RDP; I’m just asking, should that web application server be able to send packets to RDP port 3389 on the database server? And should the web application server be able to go past that SQL Server to other stuff on the network?
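
The least-privilege question Randy is asking boils down to a tiny allow-list between tiers. A minimal sketch, with hypothetical addresses (1433 is SQL Server, 3389 is RDP):

```python
# Least privilege between tiers: the web server may reach the database
# server only on the SQL Server port. Everything else, including RDP
# "at" the server, is denied by default.
ALLOWED_FLOWS = {
    ("10.1.0.10", "10.2.0.20", 1433),  # web tier -> db tier, SQL Server
}

def is_allowed(src_ip, dst_ip, dst_port):
    """Default deny: a flow passes only if it is explicitly listed."""
    return (src_ip, dst_ip, dst_port) in ALLOWED_FLOWS

print(is_allowed("10.1.0.10", "10.2.0.20", 1433))  # True: the app needs this
print(is_allowed("10.1.0.10", "10.2.0.20", 3389))  # False: can't even RDP at it
print(is_allowed("10.1.0.10", "10.3.0.30", 1433))  # False: nothing past the db
```

Whether the enforcement point is a hardware firewall, a host firewall, or a cloud security group, the policy itself is this small: one allowed flow, everything else dropped.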

Randy Franklin:
Now folks will say, well, that’s what a DMZ is for. Sure, but what we find is that even with stuff in a DMZ, there’s still not adequate segmentation between systems further inside the network. And that’s kind of my point on that bullet. But that’s just one place where we need more segmentation. What about zones that have different trust levels? What are some examples of that? Well, you have different areas where end users are connected, end-user workstations. That is a very different level of trust than your systems in the data center, as just one example.

Randy Franklin:
But there’s plenty of others beyond that. Maybe, kind of closely related to that, is if you get into SCADA, and I just can’t think of the other acronym right now, but it’s basically systems on the factory floor, or for plant control, or stuff like that. That’s a very different level of trust and should be a different zone, and there should be highly controlled routes for allowed traffic between that trust level and somewhere else. And here’s just another example: most organizations have phone systems.

Randy Franklin:
I was amazed at a few organizations that I recently talked to, Tim, where all of those VoIP phones are on the exact same network as workstations and servers. So we have a whole other population of endpoints that could be compromised. And these phones have a massive amount of functionality in them. They’re just a great staging area for bad guys; they’re basically running some kind of Linux operating system. And it blew me away that your average user, if they wanted to, could Secure Shell into their phone or somebody else’s phone. And likewise, those phones could make outbound TCP connections to workstations and servers that are just so far outside their universe of stuff they need to talk to.

Tim:
Yeah, the threat landscape, as it relates to what can actually be compromised now, has just so greatly expanded, right? I mean, when we think about IoT, and you were talking about SCADA and industrial IoT as well, almost everything is connected in some form or fashion to the internet nowadays, or has some type of an IP stack, and of course that makes it a target. And depending on the level of sophistication, or the level of security that went into the operating system for that IoT device or that IP-attached network device, once it’s compromised, you don’t even know. I mean, a nefarious actor, a bad actor, can get on there and sit and watch other attacks.

Tim:
Like you said, once they’re in, they’re in, and then they just try to ferret their way through the network to find out where else they can go. But it gives them a launch pad, and it gives them a way to further penetrate the network. So it’s a scary proposition right now, because there’s nothing that really mandates the level of security sophistication for a lot of the IoT technology that’s being deployed.

Randy Franklin:
Yes. And then think about this: user endpoints and back-end servers. Think of some of the internal applications, we’re not even talking about cloud. Let’s say you have SAP R/3 running: your end users need to be able to talk to your front-end SAP R/3 server, but what about all the servers behind it, the storage servers, database servers, and integration servers that tie in with other systems? Can your end users ping those systems? That’s just yet another example of where least privilege and segmentation come into play. And it just goes on and on from there.

Randy Franklin:
But the overriding principle, ideally, the Holy Grail at the end of the continuum, is that every IP address can only communicate with the other IP addresses it really needs to. So anything it tries to talk to that is outside of what it really needs, least privilege, would be blocked. Let’s see here. Now let’s pivot and talk about cloud. I think this is pretty major, and we are starting to see hacks that prove that those of us who have misgivings about this are right. So here’s my posit: cloud resources are currently overexposed to the internet. Pretty much anything that you spin up in the Cloud, whether it’s AWS, Azure, or Google, by default is going to be accessible to the internet.

Randy Franklin:
If it’s a VM, you get an internet IP address that you can Secure Shell or RDP into. If it’s storage, it’s out there; there’s a URL, and anybody in the entire galaxy can access it if they know that URL and have whatever the authentication is, whether it’s a storage key, or a token, or whatever. That’s not a good thing, and we’re starting to see that proven. Because what that means is that now every IP on the internet can test that authentication. And there’s only one thing protecting that storage or that virtual machine, and that is the authorization: the storage key, the token, the password. And so here, what you’re seeing is, let me get my pointer out here, some Amazon EC2 instances, basically virtual machines, and then you’ve got some S3 storage buckets down here. And this is all part of some application, and the virtual machines need to access that storage.

Randy Franklin:
Also, of course, you’ve got your data center that probably needs to upload data to that storage, so it’s accessible to these VMs running in the Cloud, or maybe it needs to pull data down from them. But by default, we just do all that through the internet. And that means this bad guy over here, if our developers do anything wrong with the storage or the EC2 instances, then the bad guy can access that, and the perfect case is Capital One. That weird little issue with their web application firewall and the way the metadata service works in AWS, but it was just child’s play to get an authorization token to be able to access that storage.

Randy Franklin:
Now, those things are going to happen, those kinds of misconfigurations and mistakes by developers are going to happen. But if there had been a firewall here, and remember, folks, when I say a firewall, it may just be a set of rules in the Cloud, in AWS or Azure, it’s not necessarily a Palo Alto firewall, okay? But if there had been some segmentation here that says, look, the only folks that need to access the storage bucket are these EC2 instances and the data center, then that one mistake by the developer would not have allowed the hack that took place.
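
Conceptually, the segmentation Randy describes is just a source allow-list on the bucket. A sketch using Python’s stdlib `ipaddress` module, with invented CIDR ranges standing in for the VPC subnet and the data-center egress range:

```python
import ipaddress

# Only the VPC subnet holding the EC2 instances and the corporate data
# center egress range may reach the storage; the rest of the internet
# gets nothing, so a leaked credential alone is no longer enough.
ALLOWED_SOURCES = [
    ipaddress.ip_network("10.0.1.0/24"),     # EC2 instances in the VPC
    ipaddress.ip_network("203.0.113.0/24"),  # data center egress (example)
]

def may_access_bucket(source_ip):
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_SOURCES)

print(may_access_bucket("10.0.1.17"))     # True: an EC2 instance
print(may_access_bucket("198.51.100.9"))  # False: internet host, token or not
```

In AWS the same intent is expressed with bucket policies or VPC endpoints rather than Python, but the point stands: with a source restriction in place, the Capital One-style credential theft alone would not have reached the data.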

Randy Franklin:
So it needs to look something more like this: what is actually accessed from the internet is exposed to the internet, and nothing more. And you can do that with any of the Cloud technologies, but it requires some extra work, because security always requires extra work. And Gartner agrees with this. I’ll just jump ahead for one second: Gartner is saying that 50% of organizations have unknowingly exposed infrastructure-as-a-service to the internet. So yeah. Any thoughts about cloud resources and overexposure to the internet, Tim?

Tim:
It’s the Wild Wild West out there right now as it relates to cloud. You said it earlier, when you first opened up: once it’s placed in the Cloud right now, once it’s placed out there, there’s almost immediate access, right? So the threat, the probability of risk, has really widened as we’re deploying things into the Cloud. And the probability of deeper penetration or compromise of that data has deepened also, as opposed to something being exposed on premise. And we have hackers, we have bad actors out there, that are routinely, daily, scanning for public IP addresses that have data exposed.

Tim:
So if you put something out there, and it’s not secured correctly, the chances that somebody’s going to find it are incredibly high. It’s not will they find it; they will find it. And so I go back to this: as it relates to applications, resources, assets, and services being deployed in the Cloud, organizations need to spend time defining the security parameters, what the security of that resource needs to look like, right? Who wants access? Who should have access to that? Who shouldn’t have access to that? And how deeply do I need to protect it? And then I need to test that implementation too. I need to define my security intent, my security strategy, around that asset, and then I need to make sure that that’s what exists, that the security implementation around that asset, resource, or service is exactly what we believe it is. And I think that’s the big breakdown right now.

Tim:
The people that are performing a lot of the security configuration of the application resources and services being deployed in the Cloud aren’t necessarily the same individuals that are managing the firewalls on-prem. And we’ll talk about that later, too. But it’s a big concern that we have.

Randy Franklin:
Yeah. And Kevin, I’ll get to your point here in just a little bit, so stay tuned. In fact, this will probably be a good point right here. So here’s another trend, and this one speaks to the more sophisticated full-stack firewall and the need for that: attacks are stealthier than ever. Attackers are hiding their command-and-control and exfiltration traffic deeper than they ever have. They’re hiding it inside of DNS, they’re even using things like time synchronization protocols, and certainly they are encrypting it. I often talk about how having network visibility isn’t enough; we need to have visibility on the endpoints.

Randy Franklin:
And so that means log collection from endpoints and EDR agents running on endpoints, because there’s stuff that you can only see if you are on the endpoint. And that is very true, and I believe in it. But let’s also be honest and admit that we’re always going to have limited visibility on many of our endpoints. There are going to be endpoints where we can’t install an EDR agent; there are going to be endpoints that we just are not getting logs from. And that is good enough reason right there to say, well, we need sophisticated analysis of traffic at the network level so that we have visibility there. But let’s also admit that even if we had 100% visibility on 100% of our endpoints, we would still need multiple layers of defense, because any one layer of defense can fail.

Randy Franklin:
So the bottom line is, we need to be able to watch what’s happening on the network, and we need sophisticated technology for doing that, because of the lengths attackers will go to in hiding their traffic. The bad guys are encrypting, plus there’s a push to encrypt lots more of the traffic on the internet. Google really pushed, and was successful in getting websites to move to HTTPS, but now there’s a push underway to start encrypting DNS data too. And it’s a double-edged sword.

Randy Franklin:
So it protects some data, but then it makes it more difficult for us, from the enterprise point of view, to be able to police what is being transported throughout our network. Now, that is, again, where next-generation firewall technology comes in with decryption. And there are even dedicated SSL decryption products for organizations that need it at scale, and that is extremely valuable. You basically are a man in the middle: you’re decrypting the data going in either direction, inspecting it, and then re-encrypting it.

Randy Franklin:
But, of course, there is the problem of certificate pinning. When applications pin certificates, that breaks SSL decryption, and there’s nothing you can do about it. So at that point, you need to make a decision about whether you’re going to allow opaque traffic to flow. And the criteria for that becomes: well, what’s the destination? Where’s it going to? What do we know about it? And can you get some of this through IDS monitoring? You can. But we need real-time response capabilities. So, let’s see. Jim says, regarding stealth, a good next-gen firewall will have protocol inspection. I’ve seen non-HTTPS traffic over 443, and dumb firewalls missed it.

Randy Franklin:
See, there’s a perfect example of what I’m talking about with regard to sophisticated network traffic analysis. So Kevin had said, next-gen and segmentation rely on inspection of traffic, but there’s a big drive for data encryption that effectively bypasses the next-gen features. So that’s true. Again, just to recap, Kevin, that’s true if we don’t have SSL decryption. Most next-gen firewalls are going to have that; they may or may not be able to handle it at scale. There are SSL decryption technologies that will handle it at scale. But then certificate pinning defeats that. And so now we have to determine: are we going to allow pinned traffic, which remains opaque to us, to or from those IP addresses? And John says, even with encryption, couldn’t you fingerprint DNS exfiltration by the sheer volume of DNS requests? Yeah.

Randy Franklin:
So again, even when data is encrypted, there’s still analysis that can be done. These are all things that speak to full-stack firewall technology and deploying it at places where it’s needed. But here’s the thing: just throwing more firewalls, more network traffic enforcement points, more next-gen firewalls at the problem isn’t going to solve it, because the sad truth, the skeleton in the closet, is that the care and feeding of the extant firewalls we already have is in bad shape at many organizations, and that’s because of a lack of automation.

Randy Franklin:
Organizations deal with tons of change requests to their firewalls. And just to prove that the automation isn’t there, and to show the magnitude of this problem: a large percentage of those requests are not solved the first time around and require rework due to some kind of inaccuracy or misconfiguration. So think about that. If a large percentage of firewall change requests break something, and we notice that, because we notice when legitimate traffic is blocked, how many of them are opening up a pathway for malicious, or at least unauthorized, traffic? We’re not going to see that. Why? Well, first of all, nothing’s broken that a user is going to complain about.

Randy Franklin:
Number two, most organizations do not test firewalls for blocking unwanted traffic. All the testing is around whether the traffic is allowed. On top of all that, the change management process is ad hoc email requests, spreadsheets, junk like that. Add on to that all the complexity. A third of the organizations that you surveyed, Tim, have over 100 firewalls, and most of them are split between multiple firewall vendors, and then there are multiple teams. So you’ve got all that complexity.
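
As a hedged illustration of what that missing negative testing could look like: probe addresses that policy says must be unreachable and treat any connection failure as "blocked." The host/port list below is hypothetical; a real suite would be generated from the firewall policy itself.

```python
import socket

def is_blocked(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port fails (refused,
    timed out, or unreachable). A DROP rule usually shows up as a
    timeout; a REJECT rule as an immediate refusal."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # connection succeeded: traffic is NOT blocked
    except OSError:
        return True

# A negative test suite is just the list of (host, port) pairs the
# policy says must be denied, re-checked after every rule change.
denied = [("127.0.0.1", 9)]  # illustrative entry, not a real policy
for host, port in denied:
    print(host, port, "blocked" if is_blocked(host, port) else "REACHABLE!")
```

Running checks like this on a schedule catches the silent failure Randy describes: a change that quietly opens a pathway no user will ever complain about.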

Randy Franklin:
So when you take the quantity of firewalls and the quantity of rules and the quantity of change requests, mix in all of the complexity, and then subtract automation, you are set up for failure. And Gartner here points out two key factoids. So basically, all firewall breaches are due to misconfiguration, not flaws in the firewall technology. On top of that, we have that other statistic we just talked about: 50% of organizations in 2018 unknowingly exposed stuff to the internet.

Randy Franklin:
So firewalls, if they’re not managed, and if we’re too afraid to change them, if we’re too afraid to clean things up for fear of breaking stuff, then it all just breaks down. And you really have to begin questioning, is it even worth having this firewall in place if we’re too afraid to change it? So what we need is automation and centralized management of firewalls, a centralized, single-pane-of-glass view of the configuration of all of our firewalls, and the ability to express our intent and then have that translated into the specific cryptic rules of each firewall vendor.
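
As a rough sketch of that intent-translation idea, the snippet below renders one vendor-neutral "allow" intent into two different rule syntaxes. The intent schema is made up for illustration, and the generated commands are simplified approximations, not any vendor’s exact grammar.

```python
def render_rule(intent, vendor):
    """Render a vendor-neutral intent into one vendor's rule syntax.
    Both output formats below are simplified illustrations only."""
    src, dst, port, action = intent["src"], intent["dst"], intent["port"], intent["action"]
    if vendor == "iptables":
        target = "ACCEPT" if action == "allow" else "DROP"
        return f"iptables -A FORWARD -s {src} -d {dst} -p tcp --dport {port} -j {target}"
    if vendor == "asa":
        verb = "permit" if action == "allow" else "deny"
        return f"access-list OUTSIDE extended {verb} tcp {src} host {dst} eq {port}"
    raise ValueError(f"no renderer for vendor {vendor!r}")

# One statement of intent, rendered for two hypothetical platforms:
intent = {"src": "10.1.0.0/16", "dst": "10.2.0.5", "port": 443, "action": "allow"}
for vendor in ("iptables", "asa"):
    print(render_rule(intent, vendor))
```

The value of the pattern is that humans review the intent once, and the per-vendor rendering, where most transcription mistakes happen, is mechanical.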

Randy Franklin:
We need to be able to know, if we make this change, what is it going to break? So what are our current traffic flows? We need to be able to do what-if analysis, in other words, and there needs to be workflow and change control over firewall changes. If we do this, then hopefully we can meet goals of eliminating misconfigurations, speeding up changes, becoming more agile, and being able to add more segmentation in order to slow down attackers or just stop them in their tracks. And we need to be able to confirm that unauthorized traffic is actually blocked. And so this is stuff that you talked to a lot of organizations about in putting together your State of the Firewall report. And this is the kind of technology that you guys built. So Tim, you want to take it over?
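
A toy version of that what-if check, under the assumption of simple first-match rules keyed on source CIDR and destination port (real firewall semantics are far richer): evaluate every observed flow against the current and the proposed rule sets, and report any flow whose disposition would change.

```python
from ipaddress import ip_address, ip_network

def evaluate(rules, flow):
    """First-match evaluation: each rule is (src_cidr, port, action).
    Returns the action of the first matching rule, default 'deny'."""
    src, port = flow
    for cidr, rule_port, action in rules:
        if ip_address(src) in ip_network(cidr) and (rule_port is None or rule_port == port):
            return action
    return "deny"

def what_if(old_rules, new_rules, observed_flows):
    """Report observed flows whose disposition changes under the proposal."""
    return [f for f in observed_flows
            if evaluate(old_rules, f) != evaluate(new_rules, f)]

old = [("10.0.0.0/8", 443, "allow")]
new = [("10.1.0.0/16", 443, "allow")]   # proposed tightening
flows = [("10.1.2.3", 443), ("10.9.9.9", 443)]  # from flow records
print(what_if(old, new, flows))  # → [('10.9.9.9', 443)]
```

Even this toy shows the principle: replaying real traffic flows against a proposed rule set surfaces breakage before the change goes live, instead of after the help-desk tickets arrive.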

Tim:
Randy, thank you very much. Yeah, this is the fun stuff. So let me get my PowerPoint up here. And let’s-

Randy Franklin:
John’s…

Tim:
… Sorry.

Randy Franklin:
So I just want to mention someone else’s point. Alistair says that he’s looked it up, and they’ve had 433 RFCs for their firewalls just this year, since January. So case in point. Alistair, thanks.

Tim:
So this is, we do this each year. Randy, can you see my screen okay? I’m assuming everyone can see that. So want to make sure the audience-

Randy Franklin:
We’re not seeing the slideshow. We’re seeing like the whole PowerPoint thing.

Tim:
… All right. That’s not what we want to see here. Let me change that.

Randy Franklin:
That’s more like what you want right there.

Tim:
There we go. Looks better.

Randy Franklin:
Yep.

Tim:
Awesome. So we do this each year. We run this survey, the State of the Firewall. We do a couple of surveys: one is the State of the Firewall, and then we also do the State of the Hybrid Cloud, which we’ve started tracking. But the exciting thing about the State of the Firewall is that this is our sixth year of doing it. So the data is starting to get better. It’s also starting to expand. We had almost 600 participants this year. And so I’ll talk a little bit about the survey demographics, so we’ll give the audience a flavor for who we talked to and who was responding to this survey. And of course, we’re not going to go into the nitty-gritty of every single question that we asked.

Tim:
There were about thirty-something questions on the survey, but we’re going to bubble some of this up and look at the things that we think were illuminating. So you can see here, this was another good data point from this year’s survey, because we had a number of C-level executives participate in the survey as well. And so we’re getting both the hands-on perspective and the management perspective, at the top, at the C-level view, some insight into how they view what’s going on inside, say, the firewall as well. And then a pretty good mix of company sizes as well. And probably everybody here knows, I mean, it seems like the bigger the networks, the larger the enterprise, the more complex, the more messy things can get. It’s very interesting.

Tim:
For those that aren’t familiar with FireMon, we’ve been around for over 15 years. So a lot of deep expertise in the security management space, dealing with very large enterprise accounts as well. Some of these things were not surprising to us, and then some of the things were surprising, and I’ll highlight some of those. But I’ll add a little commentary here. This is not me making up the data; this is coming from you guys, this is coming from the field, from the users that we talked to. But I’ll try to add a little experience and color to it as we go through and look at some of these statistics that we found.

Tim:
Mainly, this was North America. So I just want to make sure that everybody understands that. Although we’re a global company in nature, and we did do the survey from a global perspective, predominantly, most of the answers came from North America. And you can see here a lot of IT services. If you look at the industry stats on the left, as far as where the people sat across the different market verticals, you can see a lot of the participants come from the IT services area, but we see retail and finance and energy and education. And this is a good reflection of FireMon’s customer base too.

Tim:
We have customers in every single market sector. And I think that speaks to our product value, but more importantly, it speaks to the common challenges across all these different market verticals as well. But if we look at some of the major themes that surfaced from the report, one was lack of automation, and we’ll talk about that; network complexity, Randy talked about that and hit on it. Randy also talked about visibility, and that came to the surface as well. And the continued importance of firewalls, as we talked about, them not going away anytime soon. So we’ll hit on that and see what everybody had to say about it.

Tim:
Now, there’s a lot of data here on the right. I’m not going to ask you to try to decipher all the colors there. But I will tell you that the blue and the orange combine as “challenged” in general, meaning very challenging or somewhat challenging, all the way over to the yellow, which means they didn’t find it challenging. But we surfaced these top five, and you’ll see that dependency on legacy technology came to the top, along with keeping the environment secure during transformation.

Tim:
So you think about the number of companies that are going through their cloud-first strategies or their digital migration strategies, trying to take brownfield, so existing applications that they currently have on-prem, and migrate them to the Cloud. But then we also have greenfield, so new applications that are being deployed into the Cloud at an alarming rate. And so they’re struggling with some of that and with making sure that they’re secure in the process as they go through that migration.

Tim:
We also surfaced to the top, and we’ve seen this firsthand from our client base as well, the lack of integration across the security tools, meaning that I have some tools I can use on-prem, but they don’t necessarily translate when I get to the Cloud. And so they start looking at other things. Well, I’m going to have to round out my inventory of point solutions in order to get better coverage. And of course, then that gets back to the C-level, and they’re saying, hey, before we go off and spend more money, I need you to help me quantify the return on the value that I’m getting from the existing things that we have. And so at the C-level, I see them challenging the number of solutions that they have in their repertoire today, to say, hey, we need to either aggregate some of these or consolidate, but right now cost is a concern because we have so many tools, and no one’s able to clearly quantify the return.

Tim:
And this isn’t just at the network layer. As we think about firewalls, we’re really thinking about network security. But from a tools perspective, I’m talking about the desktop layer, the server layer, the database layer, the data center layer, the cloud layer, even going up into the Cloud when we think about edge security and CASB-type functionality as well. That’s lack of integration. Then there’s lack of the right people. We definitely see this as a manifestation of some of the problems that people are challenged with or experiencing, because there definitely is a shortage of cybersecurity expertise out there today, a shortage of cybersecurity skill sets in the right positions, and people trying to find and fill those right positions. But then we also see that the resources within the organizations are being stretched too thin.

Tim:
We hear that directly coming from the field, meaning that I have my best people doing a lot of recurring tasks, they have a lot of things on their plate, and they’re not able to get to some of the higher-skilled activities that I need them to, because they’re bogged down with too many cycles in the day doing things that are too mundane and frequently recurring, yet have to be addressed in order for the business to move on. So that’s a struggle. And then sometimes there are internal politics across the environment, silos of communication. But what’s interesting here, as you look at this chart, what I would draw your attention to is this: these challenges don’t just stand by themselves. If you combine these challenges, they stack, and the effect, the impact that they have on the business, can stack as well and multiply.

Tim:
So you think about it: I have reduced visibility in my hybrid environment, and I have a lack of skills and integration across my tool set. Imagine combining those challenges together, and it’s no wonder that we see some of the problems taking place in the market today, and the headlines that we see coming to the top, right? It doesn’t really surprise us that we see misconfigurations. And this aligns with what Randy was talking about also, and what Gartner has said. I mean, it doesn’t take much of a search to see the number of data breaches that are going on. And this goes back to my point earlier about bad actors going out there just scanning public IP addresses, looking for open access to data that is getting put out there weekly, right now.

Tim:
So it’s a big problem that we clearly don’t have our hands around right now, and I don’t think we’ve seen the last of the big data exposures, very much like the one that we saw at Capital One. But what’s even more interesting here is what I call the tail of the breach. It’s not just the initial impact that has to be addressed by the organization that’s been breached, but the fallout that they have to deal with after the fact. In the Capital One breach, we see that the Senate is involved now, and that they’re going after Amazon as well, and GitHub, and trying to make them responsible for what happened. And they’ve accused Amazon of selling broken technology, and there’s all kinds of ugly things being said about that.

Tim:
We see that Capital One made a swift move to reassign the CISO, although he still remains on board and will be acting to handle some of the fallout from this breach. We also see lawsuits being levied against Capital One, and possibly Amazon and GitHub as well, and there’s just all kinds of things going on. The typical tail of a breach, when it takes place, is anywhere from three to four years that the breached company has to concern themselves with. And so that’s a really long time, and it’s also very expensive, and not all companies can sustain the impact of a breach. A lot of them can be put out of business.

Tim:
I thought this was interesting. Randy, I just put it up there because it was a statement by the CISO of Amazon. Again, this was probably in response to the senators asking the FTC to investigate Amazon. But sometimes humans make mistakes, and they do. It’s a very true statement. But how do we help the human factor? How do we reduce the human factor? How do we remove the human element? And I’m not saying replace it, I don’t want anybody to misunderstand me, because when I talk about automation, and we’ll talk about some of the stats that surfaced to the top here from the report, it’s not removing the human element, it’s helping the human element become more efficient and more consistent at what they’re doing.

Randy Franklin:
Yeah. We don’t excuse breaches because humans make mistakes. We have controls in place because humans make mistakes.

Tim:
Right. Human error is a big issue for misconfiguration. But we see these, and Randy talked about these as well; this was definitely something that surfaced out of the report, just the number of misconfigurations that cause an impact to the system. 36% of the respondents said that somewhere between 10 and 24% of the changes require some type of rework. And then you look at, and we’re going to look at that a little deeper too, the number of changes that they’re faced with on a week-in, week-out basis.

Tim:
But this was one thing that did, and there’s not a lot of things that surprise me, having been at FireMon for 12 years, helping a lot of companies try to better manage the security technologies that they have in place, augment the technologies that they have, and helping their people become more efficient. But this one jumped up for the first time in the survey, the lack of automation, and it’s at a pretty high level. And what surprises me about this is the cybersecurity skills shortage that’s out there today, and the fact that we’re not adding people, yet the complexity of these networks continues to rise. And you talked about it, Randy, from a rules perspective: the sheer volume of rules that they are faced with managing within the infrastructure is continuing to go up.

Tim:
But yet the resources necessary to manage that increase in complexity, that increase in application deployment, that increase in the sheer volume of rules, we don’t see that going up at all; we see it remaining stagnant. And so if you’re not adding people, how do you address this rise in complexity? I think the answer is automation. But clearly, based on the responses that we received from the survey here, we can see that automation is not being leveraged as widely as it should be.

Tim:
This gives a little better breakdown of the number of changes by company size. Imagine if you’re down in that 2% over there to the right, greater than 500 changes per week. I can’t imagine processing that many changes, or, moreover, the number of people that would be involved in that. 45% of the respondents said they processed between 10 to 99 changes a week. So that’s the blue and the orange that you see there. And how many of those were manual in nature? And when I say manual, I’m thinking of the amount of cycles that’s required to do that, too. But moreover, how many people are involved also? It’s not just one person; many times it’s two or three, or we’ve seen review teams of as many as four or five people that have to review the changes before they go in place. And this introduces another issue.

Tim:
We hold on to some of these processes that give security such a bad name for being slow and unresponsive. You tell somebody no long enough, you tell somebody to wait long enough, and they look for a way around you. And we see some of that taking place with the applications that are going into the Cloud today. We see business owners and stakeholders and DevOps and other entities taking responsibility for implementing their own security controls for the application assets and resources and services that they’re deploying in the Cloud. So it’s no wonder, and it’s not that they’re not smart people, they’re incredibly smart people. They’re smart about the things that they’re experts on, but they’re not necessarily security experts. And so they’re deploying these applications in the Cloud, and they’re not necessarily getting the configurations right, or they’re misinterpreting what they’re responsible for versus what the Cloud provider is responsible for. And so again, we go back to, it’s no surprise that we see misconfigurations among the things hitting the top headlines today.

Tim:
Network complexity and visibility is another area that surfaced here. Randy talked about visibility as well. And again, you don’t have to go very far with an internet search to see the number of analysts and journalists that are reporting on that. But it surfaced in the report as well. There are still a number of firewalls going into the Cloud. So what’s interesting here is the breakdown of how many firewalls I have on-premise and the number of firewalls that I’m also placing into the Cloud as well. And I wonder, it doesn’t really say here, because we don’t have the opportunity to further interview the people that responded, but there seems to be a lack of trust. And maybe justifiably so, although I would defend the public cloud providers; they are definitely getting better.

Tim:
They’re adding more functionality, more security functionality to the native controls in the Cloud. But yet, we still see a large number of firewalls going into the Cloud, to augment whatever native controls are being provided in the Cloud as well. So there’s either a lack of trust there, or they’re unsure, or they want that same level of granular control that they have on-prem. But regardless, the report or the numbers here, we see a pretty good percentage of firewall still going into the Cloud.

Tim:
We also see an increase from last year’s firewall report, where 53% of the respondents had partial or full adoption of hybrid cloud, and now we see 72%. So we also see a greater adoption of the Cloud, which goes hand in hand with what the analysts are saying too. It was either Gartner or Forrester, I think it was Gartner, but regardless, they said almost 50% of workloads will be cloud-based by the end of 2020.

Tim:
We definitely see a trend or an uptick here, and that was witnessed in our report as well. So an interesting point to surface. One of the things I think we should all take note of, though, again, is that complexity still made the top of the list, complexity in the infrastructure. And complexity comes in many forms, many shapes, many sizes, but complexity made the top of the list. And you’ll see from the next slide, we break it down a little further to look just from a C-level perspective: what did they consider the top management challenges as well? So you can look at these and then look at their responses as well. But definitely complexity. And when we talk about complexity, I really talk about those things that creep into the infrastructure over time that are unnecessary. And I’m not talking about the inherent complexity of a good security architecture, I’m talking about the unnecessary bloat that creeps into a firewall: unused rules, redundant rules, I talked about that earlier, and overly permissive rules, probably one of the biggest forms of complexity that creeps into firewall policies over time.

Tim:
In those cases, the IT security professional is not even a security professional, they’re an access manager. They’re not really managing the security on the firewall, they’re managing access on the firewall. And it’s not even relevant to the actual security policies that are in place. So the written security policies aren’t necessarily a reflection of what’s actually implemented on the security enforcement points throughout the organization. But we see that firsthand. It’s not unusual for FireMon to engage with a new client, and when we start helping them to clean up or address the hygiene of their firewall management, it’s not unusual for us to find 40, 50% unused rules on a legacy firewall, and I use the term legacy very loosely, meaning a firewall that’s been in place three or four years can already have a buildup of upwards of 50% unused rules, and that’s just unnecessary risk.
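
A minimal sketch of that kind of hygiene check, assuming you can export per-rule hit data from the firewall’s own counters (the rule names, dates, and 90-day idle window here are all illustrative):

```python
from datetime import datetime, timedelta

def find_stale_rules(rules, now=None, max_idle_days=90):
    """Flag cleanup candidates: rules never hit, or not hit within
    `max_idle_days`. `rules` maps rule id -> last-hit datetime (or None
    for never hit). Hit data would come from firewall accounting."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_idle_days)
    return sorted(rid for rid, last_hit in rules.items()
                  if last_hit is None or last_hit < cutoff)

now = datetime(2019, 11, 1)
rules = {
    "allow-web": datetime(2019, 10, 30),         # recently hit, keep
    "temp-vendor-access": datetime(2018, 1, 5),  # idle for over a year
    "legacy-ftp": None,                          # never hit at all
}
print(find_stale_rules(rules, now=now))  # → ['legacy-ftp', 'temp-vendor-access']
```

The flagged rules are candidates for review, not automatic deletion; the point is that even a simple last-hit report turns a 50%-bloat firewall into a tractable cleanup list.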

Tim:
One thing we know for sure: as complexity goes up in an organization, two things happen, the probability of human error goes up, but also the probability of risk goes up as well. So as we go down the list here, just to rank some of these: complexity, the ability to clean up my firewalls or optimize my firewall rules, and the number of vendors, and Randy hit on this at the end too, having clear visibility into the heterogeneous nature of the network. Especially in large enterprise organizations, there’s a lot of heterogeneity that exists there. But that means also that I have to have people that are expertly trained on each of these different platforms, you know, and if I lose somebody, there’s attrition or turnover or whatever, I have to make sure that that knowledge transfer is taking place and that I haven’t lost the skill sets that I need to manage that security technology, or whatever the technology happens to be, as well.

Tim:
So when you’re managing multiple… And sometimes mergers and acquisitions and things like that happen, and it brings additional types of vendor technology into play, and we have to make sure that we manage that effectively, right? Otherwise, we do have gaps in firewall enforcement which we see. And again, for the first time, we see lack of automation hit the list here. So look at this list here, and then as we go to the next slide, we see that the C-level responses and look what made the top of the list from the C-level perspective as well, complexity. And this is a good thing, meaning that there’s recognition, that there’s an understanding that there is growing complexity within our organizations, within our infrastructures, within our environment that needs to be addressed.

Tim:
Optimizing firewalls, the cleanup, lack of visibility, lack of automation even moved up higher in the stack from a C-level perspective. Again, I think this is a good thing. But then one thing that made their list that didn’t make the other list was firewall performance. So I found that interesting. And when I see that, I’m thinking, probably unjustly, or maybe not, it depends on the scenario. But the firewall gets the finger pointed at it a lot, even when it’s not the firewall’s fault, and we see that a lot. I’m seeing that over time as well, too. But sometimes it’s performance: the network’s too slow, they blame it on the firewall; can’t get access, it’s blamed on the firewall; whatever it happens to be.

Tim:
But as we look at visibility, it’s really hard. We like to say it’s very hard to manage what you can’t see. It’s even harder to secure something that you don’t know about, and I’ve talked about this before. I actually borrowed this from my friend, John Kindervag. I was sitting in one of his presentations at Palo Alto Networks. He gave the example of the president being protected by the Secret Service. Imagine you’re the Secret Service, responsible for protecting the president, but you don’t know where the president is.

Tim:
So it’s the same way on the network. If you have an application, an asset, a resource, a service, and you don’t know where that service exists, you don’t know where that application sits, you don’t know where that resource or that asset is, or even know that it exists, how are you going to secure it? How can you apply the proper security controls to something that you don’t even know is in play? So, having good visibility. The survey here surfaced this: 34% of the respondents said they had less than 50% real-time visibility into their network. And only 28% of the C-level said they had at least 80% visibility into the network. So extract that.

Tim:
That’s saying that almost 70% don’t have that visibility. But look at this chart here to the right, and you see there at the bottom, as it goes down, basically, it’s saying that confidence is getting higher. When I look at the visibility statement from a C-level perspective, I see that confidence wanes as it goes down the list. You can see kind of a drop-off there as you compare the two. So a little bit of an interesting statistic there.

Tim:
Randy opened up with this as well, and talking about the state of the firewall. The firewall’s not going anywhere, anytime soon. Although, we hear quite often about the demise of the firewall. We hear that identity access is the new perimeter. And I’m not refuting the value of identity access management. I think there’s tons of benefit from good identity access management, don’t get me wrong there. But the firewall itself, it’s not going to die anytime soon. And in fact, in the report, what surfaced out of it was 95% of the respondents indicated that firewalls are as critical as they’ve ever been, right?

Tim:
So the reliance on firewalls, the need for good firewalls, the need for good firewall technology, is still out there. We see firewalls being adopted in the Cloud, we see firewall as a service being adopted as well, and it’s more critical than, or as critical as, it’s ever been. And that’s also reflected in the fact that security budgets, the amount of money that they’re spending on network enforcement point technology, are going up as well. Compared to our 2018 State of the Firewall report, we see a pretty significant increase in the percentage of budget that respondents said was going toward their firewall technologies.

Tim:
So that supports the finding from the previous slide. Just to sum it up, and again, this survey, this information, is available at firemon.com. You can go to our website; it’s a resource for you. The State of the Hybrid Cloud is a resource for you. Anything that we put out there, we’re trying to put this information out. Our hope is that it can arm you when you’re trying to justify budgets, or you’re trying to acquire technology, or you’re trying to provide education across the lines of business, or maybe up to your executive management, so that you have some good solid data to go on that’s backed by people in your respective industry as well.

Tim:
But some of the survey takeaways: misconfigurations, code for human error, and we see that reflected in the news almost on a weekly basis, if not daily. Automation isn’t being leveraged at the level that it should be in order to provide greater efficiency and consistency, and hopefully reduce risk within the organization. Complexity is still the number one challenge, and complexity is one of those things that doesn’t go away on its own.

Tim:
Unless you challenge it, unless you make a concerted effort to reduce complexity within the environment, it’s not something that’s going to go away. If anything, it grows, and if anything, the debt that you’re required to pay later on down the road may grow as well, especially if you’re breached. IT security resources are shrinking, yet change requests are increasing; that’s reflected in just the sheer volume of rules going into our enforcement point technologies, whether that’s being driven by new business deployment or segmentation strategies.

Tim:
I want to say the statistic was 350,000 open jobs right now in the cybersecurity field. So a pretty significant gap there. And firewalls are not going away anytime soon. But let’s take that last statement: firewalls aren’t going away anytime soon, and they’re as critical as they’ve ever been. It would stand to reason that they need the appropriate attention from a management perspective if you’re going to gain the value. Whatever vendor it is, whatever the level of sophistication of the firewalls, if it’s that critical to the business, it definitely deserves to be managed as a critical component of your network strategy as well.

Tim:
So it needs to be managed effectively; you need to understand it. And I think, again, as I said earlier, it starts with defining what security needs to look like around the application, around the assets, around the resources or the services that are being deployed, whether on-prem or in the Cloud. We live in a hybrid world, it’s a hybrid network, it’s going to be hybrid for probably the foreseeable future, but we need to define what security looks like.

Tim:
It’s really hard to set forth your security goals, or define your security parameters, if you haven’t taken the time to identify, at the end of the day, what security looks like. And of course, it gets back into, what do I have that somebody else wants? And to what extent will they go to get it? I have to measure my defenses around that. And how well funded are the bad actors that are trying to come in to get what I have? I’m not saying that there are any silver bullets, because there aren’t, but we definitely have to apply defense in depth in order to protect those things and protect the interests of our customers.

Tim:
So many different benefits to automation. I mean, we could probably talk about each one of these separately in a webcast: reducing human error, eliminating misconfiguration. Again, trying to hit complexity head-on and reduce that complexity in order to reduce human error; elimination of friction between DevOps and SecOps where you have agile development. I know a lot of you probably have agile development strategies underway. I firmly believe that DevOps and DevSecOps, all of that, is cultural as well, and it needs to be embraced at all levels in order to get the organizations to work together as one.

Tim:
Increasing security agility. You can’t just say no. There’s no doubt that as the business capitalizes on the technology that is available today, the business has accelerated. And there’s no doubt that it’s accelerated past our ability to secure it in a consistent manner. But at some point or another, we have to have parity between security and the speed of the business. In other words, we have to be able to honor the business request, because the business is not going to stop and wait, right? The business is not going to say, oh, I’m sorry, my bad, we’ll wait for you to catch up. That’s not going to happen. They’re accelerating for the right reasons.

Tim:
They’re accelerating to innovate, they’re accelerating to gain competitive advantage in the marketplace, they’re accelerating for the right reasons, and the business is not going to stop. But we, as the security arm, have to look for ways to gain parity with the speed of business in order to honor the business request. And I believe that becomes a collaborative exercise as well. Reducing costs, even as your management at the top challenges you to provide key performance metrics around the tools that you have. I think that’s important too, because as we define our strategic initiatives each year, and a lot of times those strategic initiatives are driven by the technology that we have, or the technology that we need to further augment, or maybe something new, management is going to want to know how we’re using what we have today and the value that we’re extracting from it. And so we need to be in a position to provide that.

Tim:
So think about what those metrics are that you're going to provide to management whenever they come back and ask for them. And of course, compliance. The teeth around compliance are getting bigger. We see more compliance initiatives, especially in the area of personally identifiable information, from GDPR to the California law that goes into effect next year, and the other attorneys general who are watching what California is doing and will be following suit very soon as well. All of that needs to be put into the right perspective from a security standpoint.

Tim:
All of the new regulatory compliance initiatives talk about security by design and by default, and GDPR especially, which I applaud, drove this. That means we have to put security at the forefront of our processes, or else the potential fines get bigger. Now, I'll just quickly introduce you to a couple of things here at FireMon, because we only have a couple of minutes left, and then we'll see if we have any other questions. We've introduced a multi-level automation model. We understand you can't boil the ocean. Automation is not something that you just set, forget, and deploy. And as I said earlier, it's not that we're trying to eliminate the human factor either; we're trying to enhance the human factor, make it more efficient, less accident-prone, and more consistent, and give people back some cycles in their day so that they can focus on the higher-skilled activities they were hired to do in the first place.

Tim:
But there are different areas where automation can be applied, depending on the maturity of the organization and its ability and willingness to consume the different levels of automation. So we outline some of these areas, from where we can gain the most benefit from the low-hanging fruit, maybe that's in design recommendations or automatic compliance checks, all the way to actually pushing approved rules out, allowing the system to push the rule out rather than me having to jump on multiple platforms and push that rule out manually, possibly misinterpreting it or, hopefully not, taking creative liberties as I translate from one system to the other.

Tim:
We still see spreadsheets, emails, Word documents, and things like that being used to honor change requests. We have to get rid of that. It ultimately falls short and fails to scale, and it gets us into a situation where we're not trusted as the security arm to honor the requests coming in, because we can't scale or can't meet the speed of the business. I think we need to shift the focus of security, as I said earlier, to the resources and services being pushed out there, and ensure that our security policies can be technically enforced: that we can apply our golden rules and our guardrails, and that compliance can be dynamically applied to the security access being provided, prior to implementation, rather than having to go back afterward because of the large number of changes that need to be reworked.

Tim:
We need to find those things proactively, before they get implemented, and that requires being able to sandbox a change in the context of the policy it's targeted for, to see if there's going to be an impact to risk, an impact to compliance, any type of impact to the security profile of our organization. The FireMon solution starts with a very scalable platform. It scales to the level of the network regardless of how big it is, and at the bottom we start with the ability to search in near real time. At the heart of it, we have the ability to apply dynamic compliance. Of course, we're all trying to manage risk to a level that's acceptable to the business, and we need a way to quickly remediate it.
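The "sandbox a change before implementing it" idea can be illustrated with a minimal sketch. This is not FireMon's API; all names here (`Rule`, `GUARDRAILS`, `check_rule`) are hypothetical, and it shows only the core check: does a proposed "allow" rule overlap any guardrail "deny" rule?

```python
# Hypothetical sketch of a pre-implementation guardrail check for a
# proposed firewall rule. Names and structure are illustrative only,
# not FireMon's actual API.
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass
class Rule:
    src: str      # source CIDR
    dst: str      # destination CIDR
    port: int
    action: str   # "allow" or "deny"

# Golden rules / guardrails: traffic that must never be allowed.
GUARDRAILS = [
    Rule(src="0.0.0.0/0", dst="10.0.0.0/8", port=23, action="deny"),      # no Telnet inbound
    Rule(src="0.0.0.0/0", dst="10.20.0.0/16", port=3389, action="deny"),  # no RDP to prod
]

def violates(proposed: Rule, guardrail: Rule) -> bool:
    """A proposed 'allow' breaks a guardrail 'deny' if the two overlap."""
    return (
        proposed.action == "allow"
        and proposed.port == guardrail.port
        and ip_network(proposed.src).overlaps(ip_network(guardrail.src))
        and ip_network(proposed.dst).overlaps(ip_network(guardrail.dst))
    )

def check_rule(proposed: Rule) -> list[Rule]:
    """Return the guardrails a change would break; an empty list means safe."""
    return [g for g in GUARDRAILS if violates(proposed, g)]

# A change request that would open RDP to a prod subnet: caught before push.
bad = Rule(src="0.0.0.0/0", dst="10.20.5.0/24", port=3389, action="allow")
print(check_rule(bad))   # non-empty: rework required before implementation

# A scoped HTTPS allowance that breaks no guardrail.
ok = Rule(src="192.168.1.0/24", dst="10.30.0.0/16", port=443, action="allow")
print(check_rule(ok))    # []
```

A real product would also simulate the change against the full ordered rule base and score risk and compliance impact; the point of the sketch is just that the check runs before the rule is pushed, not after.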

Tim:
But we also need a way to implement automation in a smart way, where it's consumable by the organization and we're not managing the automation; the automation is helping us manage the environment. That's very important. And that's really it, Randy. I know we've only got a couple of minutes here, but if we have time for maybe another question or two, either for yourself or myself, I'd sure like to take one.

Randy Franklin:
Okay. One of them is just about licensing. Jean would like to know: do you license based upon the number of firewalls managed, or something else?

Tim:
Well, it really depends, but yes, it's typically by the enforcement device. Of course, when we get into the cloud, then we start looking at network security groups and VPCs. But we try to make it as flexible and as economically feasible as possible. We're very flexible in our pricing strategy around licensing the technology.

Randy Franklin:
Let's see. Gerald asks about your product: what's the footprint? Is it an appliance? Is it software? Does it run in the cloud, or what?

Tim:
Good question. We do provide an appliance, a purpose-built platform that the technology can run on. For the most part, in larger environments, it's virtualized. The short answer is we can do either: we can run in the cloud, we can run in a virtualized capacity, or we can run as a piece of equipment that goes into a rack. Again, it's really about how you and the organization need to consume it. We're flexible enough to provide the technology either way.

Randy Franklin:
Cool, awesome. Tim, thank you very much. And folks, thanks for spending time with us today. We hope this was valuable to you, and we’ll be in touch again soon.

Tim:
Thanks, Randy. Appreciate everyone’s time.

Randy Franklin:
Bye bye, and take care for now.


Get 90% Better. See How to Get:

  • 90% EFFICIENCY GAIN by automating firewall support operations
  • 90%+ FASTER time to globally block malicious actors
  • 90% REDUCTION in FTE hours to implement firewalls
