Automation: One Giant Leap for Security

On-Demand

Video Transcription

Holger Schulze:
Welcome to today’s Cybersecurity Insiders webinar, Automation: One Giant Leap for Security. Thank you for joining us today and taking time out of your busy schedules. Now, today’s webinar is brought to you by FireMon, the security automation solution that delivers continuous security for multi-cloud enterprise environments. My name is Holger Schulze. I am the CEO and founder of Cybersecurity Insiders, the online community for cybersecurity professionals. And I’ll be your moderator today.

Holger Schulze:
Now, it is my pleasure to welcome our featured presenter, Tim Woods. Tim is the vice president of Technology Alliances at FireMon. Tim, thank you for presenting today.

Tim Woods:
My pleasure, Holger. I’m happy to be here today.

Holger Schulze:
Excellent. Tim, can you tell us more about the state of cloud security and how to approach security automation?

Tim Woods:
Happy to do so. And I just want to reiterate what you said: I want to extend a big thank-you and my gratitude to those attending today. I realize that your time is very valuable, and you probably have a hundred other things you could be doing, but you chose to spend your time with us today, and for that I’m very, very appreciative. So yeah, let’s jump into this. I went out and tried to find the best slide for this, and it’s an eye chart by design. I tried to find the chart that really gives a view of the amount of technology that’s out there, and even this barely scratches the surface.

Tim Woods:
I’ve been to a number of trade shows this year: RSA, at the top there and at the top of my list, VMworld, AWS re:Invent and re:Inforce, all of these different shows. And I’m going to tell you right here and now, there is no shortage of technology out there, definitely no shortage. In fact, it’s almost mind-boggling.

Tim Woods:
I almost feel sorry for the companies as they try to evaluate the things they’re looking to acquire, use, or leverage to achieve their strategic initiatives. Many times, starting at the C-level, we have strategic initiatives. Those strategic initiatives are quite often driven by the technology that we own, the technology that we’re planning to upgrade, or the technology that we haven’t yet acquired but believe can help us reach our goals. And of course, there’s the people aspect of that.

Tim Woods:
But it comes at a time when C-level executives are also looking to consolidate and aggregate the solutions they have in their tool belt. They’re challenging their teams to say, hey, can you quantify the return on the security investments we’re making here? Because any time you acquire new technology, there’s an ownership cost that goes with it, right? There are the subscriptions, there’s the training of your people. You can have the best technology on the planet, bar none, but if your people don’t use it effectively, and if it’s not managed effectively, then you’re not going to realize the return on the investment you’ve made. So no doubt the C-levels are taking note of that, and I’ve even seen them tell their teams, hey, in order to get a new tool, we’re going to have to replace two, or there’s going to be a three-to-one trade here.

Tim Woods:
The other thing we’re seeing across the technology landscape is that C-levels and organizations are also looking for tools that can span hybrid environments. Not just tools or solutions that work on-prem, and not just tools or solutions that work in the Cloud, but solutions that can span the hybrid enterprise, in order to realize an even greater return on the investments they’re making.

Holger Schulze:
Hey, Tim, this looks overwhelming, right? With all these technologies, how do users and organizations actually select, validate, and find the right solution for their operations?

Tim Woods:
It’s a great question. It used to be, in a land far, far away, really not that long ago, that I worked with companies that had very elaborate lab setups and very elaborate proof-of-concept testing environments. But today, and we’re going to talk about this in just a few slides, due to resource constraints and highly skilled people being tied up in repetitive, mundane tasks, they just don’t have the bandwidth they once had to go off and evaluate some of these technologies the way they once could. So they’re turning to the analysts, they’re turning to other customers, they’re turning to their network, they’re turning to the vendors and saying, hey, you’re going to have to provide customer references we can talk to. They’re turning to their value-added resellers and partners to say, hey, have you validated this? Can you give us validation on this technology?

Tim Woods:
Because the last thing in the world they want to do is acquire a technology that doesn’t do what it purports to do, or acquire a technology and then have buyer’s remorse, because there’s a cost and amortization that goes along with that. So yes, no doubt they are being a lot more careful in their selection. And again, there’s no silver bullet and no tried-and-true process for this. It’s a struggle. It’s definitely a challenge for corporations today.

Tim Woods:
We did a survey here recently. We actually talked to about 700 participants, and of those 700 we qualified about 400 to gather information from. You can find it on our website. As Holger said, there’s also a great handout here, the security eBook; I definitely encourage you to grab that one off the webinar today. But you can also visit our website and find the State of the Hybrid Cloud survey out there. It’s really chock-full of very interesting findings. I’m just pulling one stat out today that I wanted to bring to your attention, because I think it’s relevant to the discussion of automation and how we can be more consistent in our security implementations.

Tim Woods:
60% in our survey, and we’ve seen this reflected in the analyst surveys, reports, and research as well, 60% of the companies said that their business was moving faster than their ability to secure it in a timely manner. So what happens when that takes place? We start seeing corners being cut, we start seeing sacrifices being made at the expense of something else, and many times it’s at the expense of security. The speed we’re talking about, this velocity of business, if I had to put an actual number on it, I might say it’s 8X or 10X what it was maybe five years ago, because they’re taking advantage of the technology that’s out there today. And they’re taking advantage of it for the right reasons, I should say.

Tim Woods:
I mean, from a competitive stance, from a market visibility stance, for customer satisfaction, there are a lot of reasons people are migrating to the Cloud, adopting a Cloud-first strategy, embarking on their digital transformations. But unfortunately, and we’re going to talk about process and automation here in a little bit, the processes that should help us keep parity with the speed of business have not matured in the same manner as the technology that allows a business to accelerate. We’re going to talk about that.

Tim Woods:
Before we do that, I think it’s important to note that one of the key challenges we have is the cybersecurity skills shortage, and it won’t take but a second, if you google this, to find all kinds of information about it. This particular research was done by the Enterprise Strategy Group and ISSA, and I’ve personally witnessed this first one here: 67% say the skills shortage has increased the workload on existing staff.

Tim Woods:
Sometimes when I talk to our customers during an initial engagement, they tell me straight up: Tim, it’s not that we don’t know what has to be done, it’s not that we don’t know what we should be doing, it’s having the time to do it. We have 15 priority ones on our plate and I can get to two of them in a given time period, and the other ones have to be pushed to the side and resurface later. But by the time they resurface, I have three more priorities on my plate. So the skills shortage has definitely had an impact on the workload the current staff is carrying on their shoulders.

Tim Woods:
Then there’s the inability to learn or utilize the full potential of the other security technologies they have. And this too: even with our own products, I’ve had customers tell us, look, I know I’m not getting the full value out of this platform, I just haven’t had the time to extend it to the rest of the team, or to train, or to take time to learn some of these things; help us extract greater value out of your products given the limited resources that I have. Then there’s training junior personnel, and there’s an expense to that. You may hire junior personnel because you can’t find the level of skills you’re looking for, but bringing them up to speed takes time and effort too if you’re going to do it right.

Tim Woods:
And probably one of the biggest ones, even though it says 40%, and I think that’s a very significant number: 40% claim that cybersecurity staff has limited time to work with business managers. In this new era of cloud and cloud-first adoption, this is unfortunate, because we’re actually seeing a fragmentation of security roles and responsibilities. We’re seeing new silos pop up between the business, stakeholders, and DevOps. We’re seeing actual friction develop between some of these different departments and staff. And that’s really unfortunate. But it comes back to this cybersecurity skills shortage at the end of the day.

Holger Schulze:
So Tim, quick question, right? Combine this with a growing and worsening threat landscape, and it’s basically a perfect storm building?

Tim Woods:
Yeah. I mean, that’s a perfect way to put it. It is a perfect storm building, and we’re seeing some of it take place. On this slide I have here now, these headlines are just all too common, all too frequent. The one in the top right there, the one about Malaysia’s Malindo Air passenger breach, just happened. It was three days ago that this was exposed. And again, a very quick google search brings all of these up to the top. By the way, that one was an AWS bucket; it was a data exposure.

Tim Woods:
AWS was quick to point out, hey, our servers were operating exactly as they were designed to. There was a misconfiguration. And that’s what we see when we have a shortage, when we’re moving too fast, when we’re trying to honor the speed of the business without doing our complete due diligence. We’re seeing not just the traditional security and IT teams take responsibility for some of the applications being deployed in the Cloud; we’re seeing DevOps, we’re seeing stakeholders, we’re seeing new cloud security teams pop up.

Tim Woods:
One of the problems we’re seeing, and customers are telling us this, is that while these are very smart individuals, and it’s not that they don’t know what to do or what they should be doing, going back to my previous point, some of them are just not well grounded when it comes to security, and so mistakes are being made. But they’re not waiting on security. The business isn’t stopping; the business isn’t saying, okay, we’ll wait until we can bring the security team along to do what we need to do. And unfortunately, we’ve all heard that the security team is sometimes viewed as the department of no, and the business keeps moving forward. So we have to find a way to embed security into the process.

Tim Woods:
We’ve seen some of the newer regulatory compliance initiatives, take GDPR as an example, talk about security by design and by default as the spirit of the language, and I think you’ll see other regulatory compliance initiatives adopt that as well. I mentioned some of the shows I attended this year; one of them was VMworld. A customer on stage, a lead security architect, said that at one point they realized that what he called their traditional ticket-punching change control process was no longer honoring the speed of the business. So they had to stop, break things down, and learn how to embed security into their deployment processes.

Tim Woods:
So this was a company that was a little further down the road. I’ll also say, in defense of cloud and in defense of AWS saying, hey, we did what we were supposed to do: they really are adding more security functionality to their public offerings, and we’ve seen that firsthand. At AWS re:Invent last year, they introduced specific features around S3 buckets to make sure they can be configured to automatically block public access, so that you don’t inadvertently expose data, especially unencrypted data, to audiences it wasn’t vetted for.
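For illustration, that S3 safeguard can also be applied programmatically. Here is a minimal sketch using boto3, with a hypothetical bucket name; it is one way to enforce the block-public-access behavior described above, not FireMon’s implementation:

```python
# Minimal sketch: turn on S3 Block Public Access for a bucket so it cannot be
# inadvertently exposed. The bucket name is hypothetical; requires boto3 and
# credentials permitted to call s3:PutBucketPublicAccessBlock.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-passenger-data",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject requests that add public ACLs
        "IgnorePublicAcls": True,       # ignore any public ACLs already present
        "BlockPublicPolicy": True,      # reject bucket policies granting public access
        "RestrictPublicBuckets": True,  # restrict access to authorized principals only
    },
)
print("Block Public Access enabled")
```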

Tim Woods:
Hackers today aren’t even hacking; they’re using automation tools. They’re using scripts to go out and comb the internet looking for public IP addresses with unchallenged access, where they can grab data that’s just hanging out there in the wild. So, unfortunately, given the state of affairs we’re in today, that is going to continue to happen. But the good news is, I think there’s positive change on the way.

Tim Woods:
This one may or may not come as a surprise, but one of the costliest impacts to business continuity is not necessarily misconfiguration, although that’s a big one. One of the surprises here is that it’s approved change: things that have supposedly already been vetted, already been scheduled for implementation, targeted for a particular maintenance window. In this particular survey and feedback, we saw that 83% of unplanned network outages were the result of a mistake made during an approved change. And I can’t blame everything on the firewall, but in this case 70% of those were directly related to the security enforcement point itself, which is quite interesting as well. And when we go and look at the statistics, the cost of an actual breach is higher than the cost of an outage.

Tim Woods:
However, change-based outages are 97% more likely. So no doubt we have change freezes; no doubt that during very critical times of operation we go into frozen change windows and things like that, because we don’t want a change impacting the availability of our critical business services. So that definitely takes place, and it’s very, very important.

Tim Woods:
The bigger problem here, and I’ll give the audience a second to consume this slide: I originally created this slide where the little dots represented the sheer volume of security enforcement rules that the IT security staff was tasked and charged with managing. And we’ve seen that grow. Back in the day, God, I’ve been doing this longer than I want to admit, but when I saw my first firewall with 10,000 rules on it, it just blew my mind. I thought, how can you possibly have 10,000 rules on a firewall? Now it’s commonplace; it’s not uncommon to find a security enforcement point with 30,000, 40,000, even 100,000 rules on it. So it’s just continued to go up and to the right.

Tim Woods:
Mergers, acquisitions, regulatory compliance initiatives, micro-segmentation, cloud adoption: there are all kinds of reasons these rule counts keep going up, and there’s also bloat. That’s why, when I talk about the complexity gap, I’m talking about this inability to manage some of these things that can come back and do harm to our infrastructure, because the resources have not kept up with the pace of the challenge. And we see the same thing starting to take place not just with security rules on the enforcement point, but with applications, with organic sprawl of applications going into the Cloud that customers don’t even know about. Remember, I mentioned a client that had more public cloud instances than they thought. They thought they were dealing with two public cloud vendors; through a scan, we determined that they actually had four, including some they didn’t even know about.

Tim Woods:
But applications are actually going into the Cloud without due consideration, without the proper security controls being placed around them. And here’s what we find as this complexity gap goes up, and it’s a real thing. I’ve said this many times, even on past webinars: I’m not talking about the inherent complexity that comes with a really good security implementation, I’m talking about the unnecessary complexity that creeps into our environments over time and wreaks havoc on risk. Because as complexity continues to rise, and as unchallenged complexity continues to go up, the probability of risk is definitely going to go up.

Tim Woods:
The probability of human error creeping into the equation is definitely going to go up. So unless we start to challenge complexity, unless we start to look at things that can help us improve our security posture as it relates to unbridled, unchallenged complexity, then you can expect bad things to happen going forward. And again, while I said this was related to enforcement points at one time, it really applies to a number of different things taking place in hybrid infrastructures today.

Tim Woods:
So if you’re not increasing your resources, and if I look at the resource line across the bottom here, it’s not going up at the same rate as the challenge line, then what do we do? The reality is we have to automate; we have to make our people more efficient. And efficiency is just one of the many benefits of automation; there are a lot: consistency, reducing mistakes, competitive advantage, which we talked about, performance, network dynamics, simplification. You can read the ones that are out here. We could probably stop and talk about each one of these bubbles individually for quite a while, and the benefits that could be gained in each of those areas.

Tim Woods:
But probably the biggest one I want to pull out and talk about is the red one with the exclamation point there, and that’s risk. In today’s hybrid environment, and in a good cloud security posture, it’s very important that we’re always looking to mitigate risk and always evaluating risk on a continual basis. Because of the rate of change that exists, especially when we start talking about cloud, containers, and virtualization, the frequency of change goes up significantly.

Tim Woods:
And so, you have to be monitoring for change, you have to be evaluating change as it takes place, and you have to be looking at that change and asking, hey, has that change introduced unacceptable risk to our environment? Does that change need to be mitigated at that point? And you weigh that against the value of your data, against what you have that somebody else wants. If you have something that is highly valuable, and you may be the target of a well-funded organization, then you have to compose your security defenses appropriately, based on who wants what you have and the value of that data.

Tim Woods:
Sometimes that’s the integrity of the company, the reputation of the company, or a responsibility to the shareholders if you’re a public company. Mitigating risk and evaluating risk on an ongoing basis is very important. At the end of the day, that’s what it’s all about: we’re trying to manage risk at a level that is acceptable to the business.

Tim Woods:
I found this little cartoon drawing here. I swear I’m going to print this out and stick it on my own wall. The nice chart with all the colors and stuff is good, but I think this little cartoon really sums it up well. It’s a perfect example of the probability of risk and what happens as complexity, especially unmanaged complexity, goes up. It’s just a neat graphic that you can talk to, so I threw it in there. I thought it would be interesting for you.

Tim Woods:
And as I talk about these things, it’s not just what the analysts are saying, and it’s not just what we as a vendor in this market sector are making up. This is coming directly from the customers, the large enterprise customers we deal with and whose trust, and whose investments in us, we’re proud to have. Again, the skills shortage is real; it comes up a lot in conversations with our customers. Then there’s vendor consolidation, the aggregation of the selected point products or the different security platform products within the organization. How do I extract greater value out of those? How do I quantify the value I’m getting out of them? How do I know the upgrade the salesman is trying to sell me on is actually going to deliver? How do I know I’m actually going to realize that?

Tim Woods:
Then there are the triggers in the SOCs, incident response, incident saturation, event saturation; they are struggling to comb through all of it. And I know there are a lot of technology solutions out there focused on reducing that saturation, on surfacing the things they should be looking at rather than letting them slip through with the 80% they’re not able to evaluate. Top of mind for everybody, of course, is cloud migration and cloud adoption. It’s mission critical, it’s number one: if we’re going to remain competitive in the marketplace, these are the things we have to take advantage of. And lest we forget compliance. Compliance ranks right up there with risk, and the teeth around compliance are getting bigger too, right?

Tim Woods:
The penalties that can be levied as a result can absolutely be crippling to organizations today. Now, if you’re Facebook, maybe not. What was it? The FTC recently levied a $5 billion fine on Facebook, roughly a month’s revenue for them, so it didn’t really hurt; they’d actually set aside billions for it ahead of time, and many felt they got off easy. But imagine what that could do to a smaller organization. So the compliance fines and the governance fines can be very real, and GDPR is starting to set certain precedents in that arena as well.

Tim Woods:
So, it’s very real. Customers are taking note, they’re definitely looking for help, and the C-level execs are definitely starting to realize this. So again, if you’re monitoring for change, understanding what that change does to your compliance posture is very important. And as we talk about change, we want to look at some of the triggers for policy change. Every change starts with a trigger, and there are really just two types, two categories we’ll call them: it’s either real-time and event-driven, such as a security event that takes place, or it’s a business-driven process. We’ll look at the process side first. Maybe we’re trying to scale up an existing service, maybe it’s during holiday hours or a seasonal market peak and we need greater capacity to handle the demand on our website, whatever it happens to be, but there’s a process that kicks off to make that happen.

Tim Woods:
There’s a new service rollout to meet a business objective and a deadline, because of a new service we’re offering that’s planned. And then there are the security threats, kicked off by the SOARs and the IDSes and the things we detect. What’s interesting is, when you look at these, the desired response to these policy changes, to these triggers, is sometimes minutes, hours, or days. But the reality we see today is often extended: it doesn’t happen in minutes, it happens in days; it doesn’t happen in hours, it happens in weeks; it doesn’t happen in days, it happens in months. So if we want to achieve these desired timelines to react to, or implement, a policy change, then we have to look at things differently.

Tim Woods:
We have to understand, as change happens, how do I evaluate that change, how do I react to it, and how do I implement the proposed change within my respective processes? Change happens across the infrastructure frequently, but it happens even more frequently when we start looking at cloud environments. CI/CD is a big one today, along with dynamic workloads, containers and containerization, things spinning up and spinning down. But when those things do spin up or spin down, and when those applications are deployed, how do we know that the right security controls accompany that deployment?

Tim Woods:
So there’s a lot this could apply to, but these triggers for change are a perfect place to start talking about automation, and we’re going to start with the process side first. Now, I put this up here purposely; we’re going to look at a slightly different visualization of what we’ll call a traditional SecOps workflow. It starts here: one of the first steps in the roadmap, the road to automation, is really the validation of your existing processes.

Tim Woods:
It’s really looking at what we’re doing today, or what the process should look like for something we want to automate, and understanding what’s working, and more importantly what isn’t working and why, because you don’t want to accelerate a broken process, right? I’ve heard it said that accelerating a broken process only gets you to failure faster. That’s not what we want to do. We don’t want to get to failure faster, we want to get to success faster. And that’s very important. But it all starts with understanding your processes today and what they look like.

Tim Woods:
So, I love this graphic, I absolutely love this graphic, because for us, for FireMon, it points directly to where we’re helping. From an automation platform perspective, look at the events on the left-hand side here in the bubbles. These are some of those triggers we talked about, both event triggers and process triggers.

Tim Woods:
On the SOAR side, we’ve listed some well-known security companies that probably everyone here recognizes, and also ITSM and service desk tools: ServiceNow, JIRA, Clarify, Remedy, things of that nature that can kick off a workflow or a request. There are environmental changes, and obviously the CI/CD pipeline, where we’re continuously updating and deploying applications. The circles you see along the bottom represent that traditional security operations path, and when events take that path, there’s a lot of room for improvement along the way.

Tim Woods:
Make no mistake, there’s a lot of room for improvement where you can automate pieces of this process, and we’re going to talk about some of the different phases of automation; I think you’ll enjoy that discussion coming up in the very next slide. But moreover, we find that somewhere around 50 to 60% of changes can actually be templatized and moved to what we’re calling a fast path. This is what we call our continuous adaptive enforcement fast path, and that’s where we want to offload some of the really repetitive, frequently redundant efforts that consume a lot of resource work cycles.

Tim Woods:
I hear it all the time: Tim, I have the best people, and my best people are performing too many recurring tasks that I believe could be automated. It consumes too much of their daily work cycles, and I hired them, and need them, to spend that time on higher-skilled activities. How can I make my teams more efficient? How do I offload these mundane, frequently recurring tasks? To do that, you have to evaluate the processes you’re using today to determine which things you can actually automate, and how to automate them.

Tim Woods:
I even have a bubble here for email and spreadsheets. Spreadsheets are probably number one; I see more manual tracking using spreadsheets than we probably ever should, and emails too. I’m not surprised anymore when I see change requests being tracked via email threads. I also see information from spreadsheets being manually transferred into ITSM systems, and I always see those approaches fall short, because it comes back to the human element. If the human element can’t sustain or keep up with the input, people get sick, people take time off, there’s vacation, other priorities pop up, and the next thing you know our tracking system is falling behind, isn’t up to date, or doesn’t have the right documentation; then we go through an audit and get dinged on it. But yeah, we see that quite a bit. It’s a perfect area for automation, primary-

Holger Schulze:
Hey, Tim. I hope you’re seeing email, spreadsheets, and manual tracking on the decline, right? Or is that still common?

Tim Woods:
… Yeah, we do see it. I would say it is declining, but it still pops up. Here’s the interesting thing, Holger: at first I thought maybe it was dictated by the size of the company, that maybe it was mid-size or smaller companies that don’t have the same technical depth or the same amount of resources. But that’s not really it; across the market, everyone is strapped for resources or has resource constraints. I see it both in very large enterprises and in mid-size organizations as well. But yeah, I would classify it as on the decline, knock on wood. It’s definitely still something we see.

Tim Woods:
Even homegrown, what I’ll call homegrown, databases or implementations: the intent of the individual who creates them is good, the spirit of it is good, but the sustainability and maintenance of it suffers, especially if something happens to the individual who created it. That individual moves on, or is promoted, or takes on other responsibilities, and all of a sudden the developer of that tracking system is no longer available. That can have an impact on the business as well.

Tim Woods:
So let’s get through these, because I’m looking at my clock here and I’m running short on time, and there’s still some good stuff to talk about. At FireMon we’ve taken an approach where we believe there are multiple levels of automation, and I think this will be music to many people’s ears. Let’s start with the block I’m not showing you here: the manual block. That’s where it starts, in the email and the spreadsheet, and hopefully that’s going down. That’s the manual process, and it’s not where we want to be at all.

Tim Woods:
The first step is what we’re calling automated design. Automated design is where we have a system that helps us with design recommendations. As an example, I need to grant access to someone new in the HR department. What path does that access need to take? What security enforcement points do I have to touch? What does the access path analysis say? Today, when they do that manually, I see security individuals use ping and SSH and telnet and finger and all kinds of different things trying to identify paths. Having a system that can give you a design recommendation, one that has already examined the network topology and its behavior, can be a really nice time saver.
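To make the access path analysis idea concrete, here is a minimal sketch of the underlying concept: a breadth-first search over a hypothetical, hard-coded topology that reports which enforcement points sit between a source and a destination. A real system would build the topology from routing and device data rather than a static dictionary:

```python
# Minimal sketch of access path analysis: breadth-first search over a
# hypothetical topology to find which enforcement points sit on the path
# between a source zone and a destination zone.
from collections import deque

# Hypothetical topology: node -> directly connected neighbors.
TOPOLOGY = {
    "hr-workstations": ["campus-fw"],
    "campus-fw": ["hr-workstations", "core-router"],
    "core-router": ["campus-fw", "dc-fw"],
    "dc-fw": ["core-router", "hr-app-servers"],
    "hr-app-servers": ["dc-fw"],
}
ENFORCEMENT_POINTS = {"campus-fw", "dc-fw"}

def find_path(src: str, dst: str) -> list:
    """Return the first path found from src to dst, or an empty list."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in TOPOLOGY.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

path = find_path("hr-workstations", "hr-app-servers")
touched = [hop for hop in path if hop in ENFORCEMENT_POINTS]
print("Path:", " -> ".join(path))
print("Enforcement points to update:", touched)
```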

Tim Woods:
Also being able to say, hey, this proposed change, can I sandbox it? From a proactive position, can I sandbox that change and evaluate it against our compliance requirements? Rather than implementing the change and then detecting that it broke something, i.e., the planned-change impacts we talked about earlier, let’s get proactive: let’s look at the proposed implementation, run our best practices against it, run our compliance initiatives against it, so that we have some foresight. Is this going to cause an impact? Is this going to change our risk posture? Is it going to introduce unacceptable impacts to our compliance posture? Things like that.
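In spirit, that proactive sandbox check is an evaluation of the proposed rule against a set of compliance assertions before anything is pushed. A minimal sketch with hypothetical guardrails follows; the forbidden-port list and prefix threshold are invented for illustration:

```python
# Minimal sketch of a pre-implementation ("sandbox") compliance check:
# evaluate a proposed rule against hypothetical guardrails before deployment.
FORBIDDEN_PORTS = {23, 3389}   # hypothetical: no telnet or RDP allowed
MAX_SOURCE_PREFIX = 16         # hypothetical: reject sources broader than a /16

def check_proposed_rule(rule: dict) -> list:
    """Return a list of violations; an empty list means the change may proceed."""
    violations = []
    if rule["action"] == "allow" and rule["port"] in FORBIDDEN_PORTS:
        violations.append(f"port {rule['port']} is forbidden by policy")
    source_prefix = int(rule["source"].split("/")[1])
    if source_prefix < MAX_SOURCE_PREFIX:
        violations.append(f"source {rule['source']} is broader than /{MAX_SOURCE_PREFIX}")
    return violations

proposed = {"action": "allow", "source": "10.0.0.0/8", "dest": "10.20.30.40/32", "port": 3389}
problems = check_proposed_rule(proposed)
if problems:
    print("Change rejected:", "; ".join(problems))
else:
    print("Change passes guardrails; OK to schedule for implementation")
```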

Tim Woods:
So rather than implementing it and then having to react, we can proactively respond. That’s the first step, the automated design phase. Next, we get into automated implementation. These phases build on each other; you carry the capabilities from the first phase into the second. In the automated implementation phase, we add the ability to automatically implement the rules that have been approved. We’ve looked at the rules, we’ve looked at the design, a team has probably approved it, or there’s automatic rule verification to say, hey, this passes our compliance objectives, it’s not causing undue risk, it’s not impacting our compliance structure. Then we can automatically either stage that change on the native security platform that needs to implement it, or implement it directly on the enforcement point itself.

Tim Woods:
But then we also need to verify. We need to make sure we did what we said we were going to do, right? We don’t want anybody taking any creative liberties along the way. We want to make sure the change that was planned is the change that was implemented, what we call an as-built: that it was implemented as designed and approved. So there needs to be that verification.

Tim Woods:
In the automated design phase before this, that verification is still a manual process, so we want to automate it as well, to make sure that when we detect the change that was implemented, we can reference it back to the proposed change, and the two should look like each other. The other thing I think is really important is capturing the documentation: automated change documentation. This is so important today, and it’s an area I see fall short so often. We don’t capture the appropriate business justification for what’s being put in place, track it, or record when that rule needs to be reviewed. These things show up in our QSA audits and compliance audits, where someone says, hey, can you show me documentary proof of that?
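Verifying the as-built boils down to diffing the approved change against what actually landed on the device. Here is a minimal sketch with hypothetical rule records; a real system would pull the implemented rule from the device configuration rather than a literal:

```python
# Minimal sketch of as-built verification: compare the approved change against
# what was actually implemented on the device, field by field.
APPROVED = {"source": "10.1.2.0/24", "dest": "10.9.8.7/32", "port": 443, "action": "allow"}
IMPLEMENTED = {"source": "10.1.0.0/16", "dest": "10.9.8.7/32", "port": 443, "action": "allow"}

def verify_as_built(approved: dict, implemented: dict) -> list:
    """Return the fields where the implemented rule deviates from the approval."""
    return [
        f"{field}: approved {approved[field]!r} but implemented {implemented[field]!r}"
        for field in approved
        if implemented.get(field) != approved[field]
    ]

deviations = verify_as_built(APPROVED, IMPLEMENTED)
if deviations:
    print("As-built check failed (creative liberties detected):")
    for d in deviations:
        print("  -", d)
else:
    print("Implemented change matches the approved design")
```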

Tim Woods:
As we get into the next phase, zero-touch automation, this is where things really get fun, because now we’re starting to talk about policy abstraction. We’re talking about a centralized policy that can be technically enforced. Instead of looking discretely at each security enforcement point, we’re asking: what does our security need to look like, at any given point in time, around the applications we’re deploying? What should our golden rules be? What do our policy application guidelines state? And how can we technically enforce those, and refer to them every time a proposed rule is implemented?

Tim Woods:
This is where we can start creating a centralized policy that is collaborative in nature, meaning we can invite the participation of the business teams, the compliance teams, the architecture teams, the operations teams, and the business owners, so we can help automate that process. This is also where we start talking about integration with the SOAR vendors. We saw some of the triggers previously with SOAR and the CI/CD pipeline: when a SOAR playbook kicks off, how can we honor and enrich that playbook, and be the component of it that automatically implements the appropriate response to a given event?
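One common shape for that SOAR integration is a small web endpoint the playbook calls with the offending indicator, which the service then turns into an enforcement change. A minimal sketch using Flask is below; the route, payload shape, and behavior are hypothetical, and a real integration would push to a policy-management API rather than print:

```python
# Minimal sketch of a SOAR integration point: a playbook POSTs an indicator
# (here, a malicious IP) and the service stages a corresponding block rule.
# The route and payload are hypothetical placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/playbook/block-ip", methods=["POST"])
def block_ip():
    event = request.get_json(force=True)
    indicator = event.get("ip")
    if not indicator:
        return jsonify({"error": "missing 'ip' field"}), 400
    rule = {
        "action": "deny",
        "source": f"{indicator}/32",
        "dest": "any",
        "comment": f"SOAR event {event.get('id', 'n/a')}",
    }
    # Placeholder: a real service would push this to the in-scope enforcement points.
    print("Staging block rule:", rule)
    return jsonify({"status": "staged", "rule": rule}), 202

if __name__ == "__main__":
    app.run(port=8080)
```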

Tim Woods:
And then last, but certainly not least, one of the things we’re very excited about is continuous adaptive enforcement. Here is where we have what we call a policy compute engine, which I’m going to talk about; there are patents pending on it. This is where we react to change within the environment. In any good environment, you always have to be looking for change. I talked about that earlier: being able to understand when things move or change, or when there’s an impact to business continuity, and to identify and react to those things as they happen, very quickly and at scale. That’s the other part of this.

Tim Woods:
If you can’t perform these operations and these automated tests at scale, then it’s going to fall short, and it’s not going to give you the desired result you’re looking for over the long term. So you have to be able to achieve these things at scale. For example, somebody comes in, makes a change to a policy, and puts in an overlapping address space that inadvertently blocks services on another rule. Being able to detect that automatically and set it back, having the system do that for you without human intervention, preventing that impact or restoring business continuity almost immediately, is just priceless. That’s what we mean by continuous adaptive enforcement. And if something moves, we want to make sure the security controls move with the data. It’s also important to note, and this is something I definitely want to bring to light, that this is a struggle and is where unnecessary complexity creeps into our policies over time.
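Detecting the collision described above, a new rule whose address space overlaps and inadvertently blocks an existing service, is mostly an address-space comparison. A minimal sketch using Python’s ipaddress module, with hypothetical rules; a full check would also account for rule order and service objects:

```python
# Minimal sketch: detect when a newly added deny rule overlaps the address
# space of an existing allow rule and could inadvertently block its service.
import ipaddress

existing_allow = {"name": "allow-web", "action": "allow", "dest": "10.20.0.0/24", "port": 443}
new_change = {"name": "block-subnet", "action": "deny", "dest": "10.20.0.0/16", "port": 443}

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

if (
    new_change["action"] == "deny"
    and existing_allow["action"] == "allow"
    and new_change["port"] == existing_allow["port"]
    and overlaps(new_change["dest"], existing_allow["dest"])
):
    print(f"'{new_change['name']}' overlaps '{existing_allow['name']}' and may block its traffic")
    # A continuous-enforcement system could roll the offending change back here.
```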

Tim Woods:
Being able to clean a policy up, to automatically remove a security enforcement statement or rule that’s no longer needed because the application we were protecting has moved to a different control zone and is now enforced by a different enforcement technology, platform, or instance, means we no longer need that previous rule. We need to get rid of it automatically; we need to remove it, clean it up, and perform dynamic hygiene on our policies as well. By doing that, we don’t get bloat in our policies over time, unnecessary complexity doesn’t creep in, and that has a very positive impact on our security posture.
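The hygiene step is often as simple as asking which rules have not matched any traffic within the review window. A minimal sketch with hypothetical hit-count data; in practice the counters would come from device usage statistics or log analysis:

```python
# Minimal sketch of rule hygiene: flag rules with no hits inside the review
# window as candidates for removal. Hit data and dates are hypothetical.
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(days=90)
NOW = datetime(2019, 10, 1)

rules = [
    {"name": "allow-legacy-app", "hits": 0, "last_hit": None},
    {"name": "allow-payroll", "hits": 5120, "last_hit": datetime(2019, 9, 28)},
    {"name": "allow-old-dmz", "hits": 12, "last_hit": datetime(2018, 11, 2)},
]

stale = [
    r["name"]
    for r in rules
    if r["last_hit"] is None or NOW - r["last_hit"] > REVIEW_WINDOW
]
print("Candidates for decommission:", stale)
```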

Tim Woods:
So I’m going to speed up a little here, because I want to save time for questions at the end. I briefly introduced the policy compute engine, and it’s critical to what we do; it’s part of our secret sauce, I’ll call it. We consume context from the network, we create logical tags around that, and we create the guardrails, the compliance rules, the application port guidelines, everything we’re going to check. Then, whenever a request for access comes in and the system wants to automatically honor and deploy it, we have the necessary context from the environment, we can select the devices that are in scope, and we can automatically push that change.

Tim Woods:
As long as it meets our golden rules and guardrails, as long as it doesn’t break any of our regulatory compliance requirements, and as long as it doesn’t add unnecessary risk to the environment, it goes through; otherwise we kick it out, and of course there has to be manual intervention to remediate it. But ideally we want to do that automatically: select the devices in scope, activate on those devices, and continually enforce. Then we also want to go back and recheck, go back and evaluate. This goes back to what I was saying earlier about continual adaptive enforcement: making sure we have persistent security, making sure our posture has not changed. So we’re always going back and rechecking, and if necessary, we update the rules.
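Abstractly, the flow just described is: take the access request plus environmental context, test it against the guardrails, select the devices in scope, and either auto-approve the push or kick it out for manual review. Here is a minimal sketch of that decision step; the guardrails, tags, and device inventory are all hypothetical:

```python
# Minimal sketch of the decision flow: an access request is checked against
# guardrails, in-scope devices are selected, and the request is either
# auto-approved for push or kicked out for manual remediation.
GUARDRAILS = [
    lambda req: "deny: telnet (port 23) is never allowed" if req["port"] == 23 else None,
    lambda req: "deny: internet sources may not reach 'pci' tagged apps"
    if req["source_zone"] == "internet" and "pci" in req["dest_tags"] else None,
]
DEVICES = [
    {"name": "dc-fw-1", "zones": {"campus", "datacenter"}},
    {"name": "edge-fw-1", "zones": {"internet", "campus"}},
]

def process_request(req: dict) -> dict:
    violations = [msg for check in GUARDRAILS if (msg := check(req))]
    if violations:
        return {"status": "manual-review", "violations": violations}
    # Simplified scoping: pick devices that touch the source or destination zone.
    in_scope = [d["name"] for d in DEVICES
                if {req["source_zone"], req["dest_zone"]} & d["zones"]]
    return {"status": "auto-approved", "push_to": in_scope}

request = {"source_zone": "campus", "dest_zone": "datacenter",
           "dest_tags": {"hr"}, "port": 443}
print(process_request(request))
```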

Tim Woods:
As an example, perhaps we’re ingesting vulnerability scan data and we find out there’s a new CVE that points at a particular high port, something we need to go off and automatically block, because we have rules across our real estate allowing that high port that could make us susceptible to this particular ugly new polymorphic virus, or whatever it happens to be. We want the system to automatically take care of that for us.

Tim Woods:
I’m going to go through these very quickly, because they’re operational; it really gets down to efficiency, time and money, time and people, right? If you could put time in a bottle, people would buy it. That’s really what we’re talking about: giving time back to our people and allowing them to use the systems that are in place more effectively. Whether that’s operational efficiency, integrated security efficiency, bringing more total value out of our combined solutions, or helping with cloud migration strategies, cloud-first adoption, and digital transformation efforts, all of those can be enhanced with automation.

Tim Woods:
I’ve got a couple of examples I wanted to use here. The first one is a real-world example of an online retailer. 35% of their time was spent writing manual rules across 25 Juniper firewalls. Now, they had a lot more firewalls, but this was just their Juniper pile, and they only had three admins. These admins, by the way, were not just doing security; they were doing everything. This was just part of their daily administration, part of their responsibilities. Using the Security Manager platform, we were able to clean up the rule base and basically enhance the efficiency of their operations, and they really felt it even extended the life of their Juniper products. That’s always a good win; any time you can put more life into hardware you’ve made an investment in.

Tim Woods:
The second one, which I touched on briefly, was a hospitality organization. In this case, they wanted to integrate SOAR-triggered changes across the multiple firewall vendors they have. They wanted to be able to take in a malicious IP list and relay it against the policies being enforced, so that if any of these malicious IPs pop up, a playbook is automatically triggered that goes off and blocks them using the APIs between our vendors. This was a big win, and it assured the reliability of not missing any of the triggers to block those malicious actors from doing the bad things we know these nefarious individuals can do. And last, but certainly not least, I would be remiss if I didn’t talk about the APIs. They’re so important.
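A minimal sketch of that kind of relay is below: it reads a threat feed and pushes deny rules to each vendor’s management API. The feed URL, endpoints, payloads, and lack of authentication are hypothetical placeholders, not real vendor APIs:

```python
# Minimal sketch: relay a malicious-IP feed into block rules across multiple
# firewall vendors through their management APIs. All URLs and payload shapes
# are hypothetical placeholders for illustration only.
import requests

FEED_URL = "https://threat-intel.example.com/malicious-ips"   # hypothetical feed
FIREWALL_APIS = {                                             # hypothetical endpoints
    "vendor-a": "https://fw-mgmt-a.example.com/api/block",
    "vendor-b": "https://fw-mgmt-b.example.com/api/block",
}

def relay_blocklist() -> None:
    ips = requests.get(FEED_URL, timeout=30).json()   # assume the feed returns a JSON list
    for ip in ips:
        for vendor, endpoint in FIREWALL_APIS.items():
            resp = requests.post(endpoint, json={"source": f"{ip}/32", "action": "deny"}, timeout=30)
            resp.raise_for_status()
            print(f"Blocked {ip} on {vendor}")

if __name__ == "__main__":
    relay_blocklist()
```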

Tim Woods:
We’re such big believers in a robust API set, because that is the key to integrating your solutions across the board. Being able to exchange information readily is a way to bring up the total value of the combined systems you’re using today. So, really important: as you’re looking at the vendor solutions you may be evaluating, even if you don’t see where you’re going to use it today, I would urge you to consider their stance on embracing an open, and I might add secure, API structure, because that is going to pay you dividends in the future. Very, very important.

Tim Woods:
Last, but certainly not least, I want to give a plug for our FireMon solution platform and a quick snapshot of what it looks like. Guys, this is a holistic platform. It covers everything from discovery, to identifying change as it happens, to evaluating that change, making sure we’re not adding unnecessary risk, and making sure we can mitigate that risk as it’s detected. And it scales, it scales to the level necessary to meet the needs of the network, from on-prem and hyper-converged data centers all the way to the Cloud.

Tim Woods:
Again, all of the things I’ve talked about today: if you can’t do them at scale, then you’re going to fail to realize the return on the investments you’re making. So it’s important that we can do these things at scale. I’ve seen too many automation strategies fail because they either didn’t scale, or there was more maintenance related to the automation than the benefit you got out of it, which is not a good place to be either.

Tim Woods:
So Security Manager is the core piece of the platform. Lumeta is our discovery and visibility component. Policy Optimizer handles automated rule recertification; Policy Planner is the automated workflow; and Global Policy Controller is where the policy compute engine comes into play, the collaborative security intent orchestration piece. And of course there’s Risk Analyzer: we always have an eye on risk. How do we mitigate risk? How do we reduce risk? How do we make sure the risk in our environment is at a level acceptable to meet the needs of the business today?

Tim Woods:
So there we go. We’ve got about five minutes left, so Holger, I’m going to turn it back over to you for any questions. I went a couple of minutes over, but hopefully I can address a couple of questions from the audience before we get off today.

Holger Schulze:
Thank you, Tim. Yeah, let’s jump right in. And the first question here from our audience, what are some of the misconfiguration issues you’ve come across?

Tim Woods:
That’s a great question. Probably the biggest and most notorious, I’ll say, is overly permissive rules. Overly permissive rules are the bane of most security professionals. When you put an overly permissive rule in, you’re allowing more access than is necessary to meet the needs of the business, and all kinds of bad things can happen. Auditors look for these things too, for too much access. So I would put that at the top of my list. There are also redundant rules, shadowed rules, duplicate rules, and unused rules that become stagnant so that inadvertent access pops up, but overly permissive rules are really what kills a security policy in general. Making sure those rules are tight, making sure those rules are designed to allow just the access necessary to meet the needs of the business, is paramount. Very, very important.
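As an illustration of the first check an automated review might run, here is a minimal sketch that flags overly permissive rules: anything using "any" for source, destination, or service, or sources broader than a chosen threshold. The rules and the /16 threshold are hypothetical:

```python
# Minimal sketch: flag overly permissive rules in a policy. The rule set and
# the prefix-length threshold are hypothetical examples.
import ipaddress

MAX_PREFIX = 16  # hypothetical threshold: sources broader than a /16 get flagged

rules = [
    {"name": "temp-vendor-access", "source": "any", "dest": "10.3.0.0/24", "service": "any"},
    {"name": "allow-branch", "source": "10.0.0.0/8", "dest": "10.3.1.10/32", "service": "tcp/443"},
    {"name": "allow-app", "source": "10.5.1.0/24", "dest": "10.3.1.10/32", "service": "tcp/8443"},
]

def is_overly_permissive(rule: dict) -> bool:
    if "any" in (rule["source"], rule["dest"], rule["service"]):
        return True
    return ipaddress.ip_network(rule["source"]).prefixlen < MAX_PREFIX

for r in rules:
    if is_overly_permissive(r):
        print(f"Review rule '{r['name']}': it grants broader access than the business likely needs")
```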

Holger Schulze:
Thank you, Tim. The next question, can I deploy multiple levels of FireMon Automation or am I limited to one?

Tim Woods:
That’s the most exciting part about the FireMon Automation platform offering: it grows with you. You can actually implement different pieces of it in different parts of your organization as you need. Again, it relates to the confidence you have in the areas you need to automate. We’re not trying to boil the ocean here. It’s not set-it-and-forget-it, it’s not one-size-fits-all, and that’s why we introduced these phases of automation. Not everyone is at the same level; even within a business, different areas have different levels of maturity, so you need to match the automation appropriately to those areas. So no, you’re not limited to one: it can grow with you all the way from, like I said, design automation up to continuous adaptive enforcement.

Holger Schulze:
Thank you, Tim. All right, looks like we have time for one more question. And the question is, what are the ways FireMon can help to optimize security policy?

Tim Woods:
That’s a great question, too. It really goes back to the first question, about what we see most often. Being able to apply good security hygiene to your environment is step one; everything else falls into place when you’re doing things that contribute to good security hygiene. Meaning: am I doing things that help get rid of those unused rules we talked about, am I doing things to identify those unused rules before they become a problem, am I doing things to make sure I don’t have technical mistakes in my policy, things that creep into the policy that don’t need to be there, like redundant rules?

Tim Woods:
One of the first things we look for in every one of our automation routines is to say, hey, the proposed access that you’re requesting, does it already exist? You’d be surprised how often duplicate access is placed into a security policy when it doesn’t need to be there. These are the things that add unnecessary complexity to a given policy, and our platform identifies that. We do policy behavioral analytics, and we do dynamic discovery of change, which I think is critically important. Every time a change happens, we have to ask the who, the what, the when, the where, and the why. Did that change happen during normal business hours? Did it happen during a prescribed maintenance window? Was it a system change? Does it have documentation attached to it? Was there a change control number?
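The duplicate-access check mentioned at the start of that answer is straightforward to sketch: before designing a new rule, test whether an existing rule already covers the requested flow. The rules and request below are hypothetical:

```python
# Minimal sketch: check whether a requested flow is already permitted by an
# existing rule before adding a duplicate. Rules and the request are hypothetical.
import ipaddress

existing_rules = [
    {"source": "10.1.0.0/16", "dest": "10.9.8.0/24", "port": 443, "action": "allow"},
]

def already_permitted(request: dict) -> bool:
    src = ipaddress.ip_network(request["source"])
    dst = ipaddress.ip_network(request["dest"])
    for rule in existing_rules:
        if (
            rule["action"] == "allow"
            and rule["port"] == request["port"]
            and src.subnet_of(ipaddress.ip_network(rule["source"]))
            and dst.subnet_of(ipaddress.ip_network(rule["dest"]))
        ):
            return True
    return False

request = {"source": "10.1.5.0/24", "dest": "10.9.8.7/32", "port": 443}
print("Duplicate request; no new rule needed" if already_permitted(request)
      else "New access; proceed to design")
```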

Tim Woods:
There are so many questions that need to be answered when change happens, and if you’re sticking your head in the sand and not looking at change, I can promise you that what you don’t see can hurt you, especially as we get into cloud and see some of these configuration issues popping up. I think it was Gartner that said 99% of outages are caused by misconfiguration, and they’re probably not far off. So making sure that you have a system that works for you automatically, rather than you working for it, is important.

Holger Schulze:
Excellent. Thank you, Tim. All right, and with that, we’re at the finish line for today’s session. As we close this webinar, I would like to thank all of you for joining us; I hope you enjoyed today’s presentation. I would also like to thank you, Tim, for sharing your insights on how to automate security. Thank you, Tim.

Tim Woods:
Thank you very much, and thanks again to our audience.

Holger Schulze:
Excellent. Now this concludes today’s session. I hope we will see all of you again at one of our future webinars. Thanks everyone. Have a great day.


Get 90% Better. See How to Get:

  • 90% EFFICIENCY GAIN by automating firewall support operations
  • 90%+ FASTER time to globally block malicious actors
  • 90% REDUCTION in FTE hours to implement firewalls
