Expand Your FireMon Journey With Automation

On-Demand

Video Transcription

Elisa Lippincott:
Good morning, good afternoon, and good evening, everyone. My name is Elisa Lippincott, and I’m the director of product marketing here at FireMon, and your MC for today’s webinar on FireMon Automation. Thank you so much for joining us today. This webinar is exclusively for our FireMon customers, and we’ll discuss how you can take what you’re already doing with FireMon to the next level with automation. If you have any questions, please submit them in the Q&A section on your screen, and we’ll address them at the end. We’ll send a recording of the webinar out afterwards so that you can share it with anyone else on your team. Now, I will turn it over to Tim Woods.

Tim Woods:
Elisa, thank you very much, and to our listening audience there today for your participation. I talk to you guys all the time and I know you have a thousand things on your plate at any given point in time, so I really appreciate you being here today with us. I sincerely hope that you get some good information out of this. As Elisa said, make sure to ask those questions and jot them down. If we don’t get to your questions today, I’m definitely going to try to save some time at the end to answer your specific questions. But if we don’t, don’t hesitate to send us an email and we’ll definitely get all those questions answered.

Tim Woods:
Let’s go ahead and dive into it. Guys, I went out and tried to find the busiest slide I could. If you can’t read this, it’s an eye chart; it’s supposed to be an eye chart. I did this for you, to try to make a point. There’s so much technology out there today. I’ve been to, God, I guess six, maybe seven symposiums and partner events and things like that this year, and it probably gets up around 10 already, just different technology events. I know it’s overwhelming, the amount of technology. AWS had their first security-only show this year, so instead of re:Invent, it was re:Inforce. That was good. God, the vendors that showed up, it’s just amazing. I put that up there for a reason.

Tim Woods:
The point here is that I get it, man. I know it’s hard to look at all this technology, but more importantly, I’m seeing from a top level, from the C-level’s view, that they’re trying to extract, understand, and quantify the value that they’re getting out of the technologies that you already own. A lot of times the technology is part of a given initiative, right? You set your initiatives for the year and you look at both people and the technology you have that you’re going to use to achieve those initiatives. Sometimes you’re augmenting existing technology or adding onto it, or you may be acquiring new technology, but regardless, I know that there’s a lot more visibility at the top to try to extract, understand, and quantify the value as you’re trying to aggregate, combine, and get more value out of those technologies that you own, and maybe tighten that up.

Tim Woods:
Reducing costs, obviously, is at the forefront. Our topic today is going to be automation. Again, you can have the best technology on the planet, but if your people can’t use it effectively, then you’re not going to get the return out of that investment that you expect.

Elisa Lippincott:
Tim, with all of these technologies that are on the slide, and this probably doesn’t even represent everything that’s out there, how do users even begin to select and validate what they’re ultimately going to use?

Tim Woods:
It is hard. That is part of the challenge, for sure. It used to be, back in the day, that many had their own facilities where they could actually stand up a given vendor’s product and run through a really thorough proof-of-concept type of testing. But nowadays, and we’re going to talk about the cybersecurity skills shortage that we have out there, the people shortage, the resource shortages that we have, we just don’t have the cycles to stand that up and run through it like we once did. So, they turn to analysts, they turn to recommendations from their networks, they turn to their partners. Not just the technology partners like FireMon, but the value-added reseller partners. They’re trying to validate that way.

Tim Woods:
Of course, we’re looking for recommendations from other, similar customers as well, to validate that you got the value you expected out of it. So, there are other avenues. The last thing in the world anybody wants is buyer’s remorse, because a lot of times whenever we consume or buy that technology, we have to live with it until it’s amortized, and so you want to make sure that you’re getting value. We’re going to talk later, toward the end of the presentation today, about extracting more value out of the combined technologies that you own, and how that’s becoming of larger importance as well.

Tim Woods:
One of the things I wanted to point to real quick, for those that haven’t seen it: if you go to our website today, you’ll see there’s a lot of collateral there around our automation efforts, and I would invite you to go take a look. There’s a really good Automation 101 that’s chock-full of some really good information. But not too long ago, we did a survey called the State of Hybrid Cloud Security, which is available under our resources section as well. I’m just pulling a page out of that survey. We talked to, oh my goodness, probably 700 individuals like yourself, and we boiled that down to around 400 that we felt were qualified as active participants of the survey.

Tim Woods:
But I just wanted to focus in on this one stat, which I thought was interesting, because it really speaks to some of the other challenges that I know you guys are faced with. I know that because you’re telling me this. I’m not just making this stuff up. It’s coming directly from you: our business has accelerated past our ability to consistently secure it. The tag to that is, we’re cutting corners. It’s not that we don’t know what to do, it’s just having the time to do it. Hey, Tim, I have 15, 16 priority ones on my plate. I can only get to so many of them, and then some of them have to be pushed off to the side. Sometimes that’s at the expense of security, unfortunately, but it’s a reality that we’re faced with, and it’s a reality that we live with today.

Tim Woods:
One of the things that I’m hearing a lot is, how do I accelerate my ability to secure the business at the speed of business? How do I gain parity with the speed of business? I mean, the business is accelerating for the right reasons, too, right? Competitive advantage, innovation, marketing efforts, things of that nature, and the technology is there today. The technology is there to allow that, people embarking on their cloud-first, digital transformation journeys, and they’re definitely right to take advantage of the technology that is out there, for the right reasons. But if I had to put a number on it, as I’ve sat back and watched this and talked to you guys, I would say it’s probably 8x, maybe 9x of what it was, say, three or four years ago. It’s an incredible acceleration that we’re witnessing today.

Tim Woods:
One thing that compounds that is this skills shortage, people shortage. Elisa, you’ve done a little research on this one. You want to speak to this a little bit?

Elisa Lippincott:
Sure. Yeah, the research that these numbers came from was actually a joint research project from ESG and ISSA. There’s just loads of information in it, but the big thing that caught my eye when I was looking through it was the last one on the bottom right, where 40% claim that the staff has limited time to work with business managers. To me, this is especially concerning, because as more and more organizations are moving to the cloud, it’s not necessarily the security team that has any insight into what’s going on. Maybe it’s even someone like me who’s spinning something up in the cloud without … not that I would do that, but I’m just using myself as an example. But there are instances where things are being spun up in the cloud, and a lot of times the security team probably doesn’t have any insight into that, and that can only lead to some pretty bad things.

Elisa Lippincott:
But that was the big one there. The other one was around just trying to find the right people. It’s one thing to say, I can’t find the right people, and maybe that is the case. Now companies are having to spend more time training more junior personnel, and some organizations may not have the time or the resources to even do that. So, it only compounds the problem.

Tim Woods:
Yeah. I read somewhere that, just in North America alone, there are something like 350,000 open jobs right now that need to be filled in the cybersecurity arena. Good market to get into. That top right one, or I’m sorry, the top left one, the 67% one, I’ve heard this directly from our customers too: Tim, I have my best people doing some of the most repetitive, mundane tasks. How can I free them up? How can I put more cycles back in their day to do some of the higher-skilled tasks that I hired them to do? I’m stretched too thin. It’s a real problem that you guys are faced with. We recognize it.

Tim Woods:
It’s no wonder that we sometimes see human error as a result of either a lack of resources or not being able to focus 100% on one thing before moving to the next. I can do a hundred things really well, or I can do a thousand things partially well, that type of thing. It’s no wonder that we see human error creeping into the equation here. That top right one, the Malaysia Air one, that just happened, as far as the records being exposed there. It’s inevitable that it’s going to happen if we’re moving too fast, if we’re cutting corners, if we don’t have the necessary resources. There’s no doubt that configuration errors can creep into the equation.

Elisa Lippincott:
Looking at all of these headlines, Tim, is it pretty much the case that in all of these headlines it’s a human’s fault? Is there anything else that could be a contributor, or is it pretty much that we’re human and we’re not perfect?

Tim Woods:
I think the good news here is … Well, it really points to two things as I’ve looked at this slide and put some things together here. One is that, and we’re going to talk about complexity going up as well, which definitely gives rise to some of these configuration errors that we’re seeing, but part of it is just, again, fragmentation, I guess, is the best word as I try to frame this in my mind so that I can articulate it. We see this fragmentation of security responsibilities. We see people taking responsibility for the security and data controls who traditionally have not done so in the past. What do I mean by that? I see stakeholders, I see DevOps, I see business owners themselves, people other than the traditional IT security teams, taking responsibility for the security of the applications that are being deployed in the cloud.

Tim Woods:
That’s not to say that these aren’t smart people. They’re incredibly smart people, but they’re just not well-grounded in a security background, or security expertise. So, you see things like this happen. I think part of the silver lining in the cloud, and this is from the last couple of AWS symposiums that I’ve been to, is that the cloud vendors themselves are adding additional security functionality, or they’re enhancing existing security functionality. I know AWS just recently put in something specifically for S3 buckets to make sure that S3 buckets aren’t exposed due to misconfiguration. On the Malaysia Air one that you see up there, AWS was quick to point out that their servers were acting exactly as they should have been; they were just misconfigured. They weren’t configured properly.

Tim Woods:
The good news is public cloud providers are getting better at security. No doubt about it. But we still have to make sure that the teams focused on the security of the applications, and the data controls around those things that we’re putting out there for our customers to consume, are the right people to secure them, or that they’re equipped with the right tools to do so. Another one here, and again, this is something that we’ve personally witnessed at FireMon over my 12 years of tenure here, and it wasn’t a big surprise to me, although I think it did surprise some folks. I’m not going to go into this entirely, but it was approved change: the impacts were coming from an approved change, implemented either at an ad hoc time or during an approved maintenance window, that caused a security impact.

Tim Woods:
83% of all unplanned network outages are caused by mistakes made during an approved change. Then of course, 70% of those were caused directly on the enforcement point technology itself. I think it was Gartner that said that 99% of impacts, not just breaches, were due to misconfiguration. Anyway, it’s a pretty significant stat when you look at it, and it tells me again, how do we become more efficient at making change? How do we vet or qualify our changes proactively so that we’re not reacting to the self-imposed system impacts or business impacts that we’re causing?

Tim Woods:
When I originally put this slide together, the dots there represented the volume increase in security rules that we’ve seen over the years. I remember back in the day, even before my FireMon days, and I’m going to date myself here when I start talking about Trusted Information Systems and Gauntlet and Raptor and things like that. But even back in the day, when I saw a firewall that had a thousand rules on it, I thought, man, that’s a lot of rules. Why does somebody need that many rules? Then later on, we saw 10,000 rules on a firewall and 12,000 rules on a firewall. Now it’s not uncommon to see 80,000 rules on a firewall, and there are many reasons for that, but no doubt the organization becomes challenged as the volume of those rules accelerates up and to the right.

Tim Woods:
We’re seeing this with other things, too. Application deployments in the cloud outside of security’s purview. I mean, things are getting deployed; Elisa, you mentioned it, you can swipe a credit card and all of a sudden you can nail up a service, it’s that easy. We were engaged with a client recently, and we discovered that they thought they had two public cloud providers. In reality, they had three, and they didn’t even know they were using the third one. But we become challenged. As this complexity gap goes up and to the right, what we’re also seeing, and again, it goes back to the skills shortage and the available jobs and things like that, is that the resources necessary to appropriately manage it have not kept pace with this acceleration, this growing complexity.

Tim Woods:
Complexity left unchallenged will result in outages. It will result in increased risk. There’s no doubt about it. The probability of human error creeping into the equation goes up, and the probability of increased risk also goes up. We have to somehow close this complexity gap. When I talk about complexity, I’m talking about the unnecessary complexity that creeps into your systems over time, the things that absolutely, positively serve no purpose for being there. There’s a certain amount of complexity in any good security implementation or any good security architecture, but the stuff that doesn’t need to be there needs to go, because it’s only going to cause us problems. If we’re not adding resources, if we’re not adding people to fix the problem, then we need to automate, or we have to make the people we have more efficient with the cycles that they’re given in order to do the tasks that they’re charged with doing.

Tim Woods:
It’s kind of a vicious circle right now. There are so many benefits to automation that we don’t have time in today’s webcast to go through every one of them, but if I had to pick one, the biggest one here would be to reduce risk at the end of the day. At the end of the day, that is what we’re trying to do: manage risk to a level that’s acceptable to the business. We’re trying to make sure that our security is directly proportional to the value of the data that we have, our customers’ data, or directly proportional to the efforts someone else might make to get that data, what extent they would go to to try to grab it, and what that means to the company.

Tim Woods:
We’re trying to manage risk to a level that is acceptable. That gets into people, it gets into technology, it gets into process, and more importantly, it gets into consistency. I printed this out and put it up in my office. This little picture here, guys, I just put in for your benefit. It’s really the same picture as the slide before, just shown a little differently. As I was going through and looking for some graph ideas, I saw this and I thought, man, I like that. This is perfect. I’m going to go ahead and print this out and put it on my wall in my office, and I did. I just think it’s neat, so I put it in there for your enjoyment, but I think it’s the perfect picture of what we’re dealing with on a day-in, day-out basis.

Tim Woods:
Again, this isn’t just me making this stuff up. We talk to you guys. We’re pretty committed to soliciting feedback from our customer base and making sure that you’re extracting the value that you expect out of our solutions. But these are the things that we’re hearing. We’re definitely hearing about the skills shortage, being stretched too thin, and needing to get more cycles back in the day. We’re also hearing about the efforts at the top to consolidate the tools that we have and extract more value out of them, trying to integrate with other solutions that we own as well. We’re hearing about the event saturation problem and how we feel like we’re missing too many events. Are we looking at the right events? Are we narrowing it down to the things that we should be spending our time on?

Tim Woods:
Are we letting things go that could potentially come back to haunt us later? Cloud and digital transformation journeys. I’ve talked about cloud-first strategies. That’s really top of mind, I think, with everybody that we speak to. Compliance is not going away by any stretch of the imagination, right? Compliance is always there, larger than life. When I talk about compliance, it’s not just regulatory compliance, whether it’s PCI or FISMA or HIPAA, or whatever it happens to be given the market sector that you’re in. We’re also talking about best practice compliance, our own internal compliance. There are two ways to embrace compliance. We either embrace it as a, God, I have to do this, or we embrace it to make us better. We know there are no silver bullets, but we need to embrace compliance in order to make our organization better overall.

Tim Woods:
I want to shift and talk about some of the triggers for security policy changes. It really breaks down into two areas: event-driven triggers and process-driven triggers. Again, this is what we’re hearing directly from you guys, the customers. It’s during their SOAR events, or any type of event that’s causing a real-time impact to the business at a given time. Obviously it’s critical, and obviously the desired remediation time is as quick as possible. But what we’re typically seeing, the reality, is that sometimes it’s taking many hours, and even days, to correct the impacts that are taking place. So, we’re looking for ways to decrease that time. Same thing with proposed changes to existing services, existing deployments that we have out there.

Tim Woods:
Whatever that happens to be, scaling up, providing more bandwidth or compute power, adding access, the timing is sensitive. Ideally, we’d like to get that done in hours. Unfortunately, it’s also taking days, sometimes a week, to achieve. Sometimes we’re our own worst enemy in that arena. Then, of course, there’s new service deployment. This is really where we need to get back some of those cycles that we’re spending either chasing problems or spending too much time on contextual changes to the existing environment. We need to have more cycles available for our new service rollouts and the new business initiatives that we have in front of us. Here, again, the target is days, but unfortunately, the typical reality is that it’s taking sometimes weeks, and you guys are telling us sometimes even months.

Tim Woods:
I want to start into automation. This is a typical workflow. I put this up here because I’m going to show you another visualization of this. Anytime we embark on our automation process, it’s a good time to look at the processes that we’re wanting to automate as well: understanding what the triggers to automation are, and where we’re going to get the biggest return on our automation efforts as we go down this path. I started here because when you look at your processes, especially if you’re automating a process, you want to make sure that process is working the way you expect it to. In other words, I don’t want to automate a broken process, right?

Tim Woods:
The only thing I’m going to do is get to failure quicker, and that’s not what I’m looking to do. But in the workflow process, there is a lot of room for enhancement. There’s a lot of room for increased functionality and acceleration along the way as well. As we look at some of these security events, to the left here, as you look at the little circles, whether it’s SOAR, and we talked about events that can cause impact to the system, whether it’s IT service management, ticket tracking and change, or environmental changes to the infrastructure itself, whether those are cloud-based or in the data center, the hyper-converged data center, on-premise, wherever it happens to be.

Tim Woods:
Whether it’s in the DevOps deployment life cycle, the deployment of new integrations, these are all things that, when they take the ticket path of change, or when they’re required to take a workflow process of change, we need to make sure that it is going through as quickly as possible, and as the system allows.

Elisa Lippincott:
Tim, I see email and spreadsheets as a part of the grouping of a potential incoming request. I mean, I know I use email and spreadsheets to track some of my to-dos, and I certainly use spreadsheets to track some of my home projects here, but are companies still using email and spreadsheets to track the security changes that they’re trying to make?

Tim Woods:
Great question. I skipped over that. I’m sorry. Yeah, we all use spreadsheets. God, it’s my first go-to anytime I’m trying to track something or do something. I couldn’t live without my spreadsheets. Maybe surprisingly, maybe not surprisingly to the audience, we still see email being used to track changes. I see spreadsheets being used to track changes and to document security rules and security policies and stuff. I can tell you with 100% confidence that I see those break down at scale. That is just not a good long-term strategy for managing change, obviously. Even homegrown CRMs and homegrown ITSMs tend not to have a prolonged future.

Tim Woods:
Especially as volume increases and people change and attrition takes place, and the people that created those or were managing them move on, it just breaks down over time. This is true. I’m not just speaking to small … I’m talking about very large enterprises that I speak to also. It’s not just mid-size or even small companies that are still using spreadsheets to track changes. It’s very large, sometimes siloed organizations that are doing that as well. But here’s what’s interesting as we look at this slide of things to automate: based on feedback from the customers themselves, they believe, and we believe, that somewhere in the neighborhood of 50% to 60% of the tasks could be automated, that we can achieve roughly a 60% reduction in some of the recurring, more cyclic events that are taking place, and that with the right approach we could fast path those.

Tim Woods:
In other words, we can automate those tasks. This is where automation really starts to give back some dividends, where it starts to pay back: we can automate some of these recurring, frequently repeating tasks along what we’re calling a fast path. If an individual needs to gain access to a marketing server and it’s not breaking any of our compliance initiatives, or our guidelines, or our golden rules, or anything like that, why can’t we just go ahead and approve that and allow it to flow automatically through the system? Then imagine having 50% to 60% of your cycles given back so that you can now spend more time on that bottom path, on the things that you need to, those new service deployments and new business initiatives that do require a higher touch and more personal human intervention.
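
To make the fast-path idea concrete, here is a minimal sketch in Python. The zone names, ports, and "golden rules" are invented for illustration; they are not FireMon's data model or implementation.

    # Illustrative only: a hypothetical fast-path triage, not FireMon code.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        source_zone: str
        dest_zone: str
        port: int

    # Hypothetical "golden rules": zone pairs that must never talk, and ports
    # that always require human review.
    FORBIDDEN_ZONE_PAIRS = {("hvac", "pci"), ("guest", "pci")}
    REVIEW_REQUIRED_PORTS = {23, 3389}  # e.g., Telnet, RDP

    def triage(request):
        """Return 'fast-path', 'manual-review', or 'deny' for an incoming request."""
        if (request.source_zone, request.dest_zone) in FORBIDDEN_ZONE_PAIRS:
            return "deny"  # never allowed, no reviewer can approve it
        if request.port in REVIEW_REQUIRED_PORTS:
            return "manual-review"
        return "fast-path"  # compliant requests flow straight through

    if __name__ == "__main__":
        print(triage(AccessRequest("marketing", "dmz", 443)))  # fast-path
        print(triage(AccessRequest("hvac", "pci", 443)))       # deny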

Tim Woods:
If we can fast track some of those things, what an impact that can have, and that’s really what we’re calling our continuous adaptive enforcement fast path. We’re going to talk more about the FireMon multilevel security automation model here. This is a model that allows you to consume the functions of automation that best fit the pace of your organization, that most closely align with your business needs and with the level of confidence that you have in your automation strategies. Now, the one that’s not showing here, the one that we’re not speaking about, is manual. We’ve got to get rid of manual. We’ve got to go at least to the first phase, which we’re calling automated design. This is where we can look at things from a workflow perspective. We can start looking at: a business request comes in, how do we more automatically translate that business request into technical speak, into something that looks like a rule on an enforcement point?

Tim Woods:
How do we identify the path that it’s going to take? Then, how do we put that proposed implementation into the context of the policy that we’re proposing to implement it on, and go ahead and run our compliance audits and assessments against it, in the context of the policy itself, to make sure that we’re not going to shoot ourselves in the foot? To make sure that it’s not breaking our compliance posture and not adding unnecessary risk that needs to be remediated. We need to know that up front. I want to be proactive about that as opposed to reactive. That’s the first step of the automation. What we’re offering here is that we don’t expect you to boil the ocean when it comes to automation strategies.
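
As a rough illustration of that automated design step, here is a small Python sketch. The application inventory, zone names, and checks are all hypothetical stand-ins for whatever object and policy data an organization actually holds.

    # Hypothetical sketch of "automated design": translate a business request into
    # a candidate rule, then assess it before anything touches an enforcement point.
    APP_OBJECTS = {  # invented stand-in for a CMDB / object inventory
        "marketing-web": {"cidr": "10.20.30.0/24", "zone": "dmz"},
        "hr-portal": {"cidr": "10.40.0.0/24", "zone": "internal"},
    }
    FORBIDDEN_ZONE_PAIRS = {("hvac", "pci")}

    def design_rule(src_app, dst_app, port):
        """Resolve business-level application names into a candidate network rule."""
        src, dst = APP_OBJECTS[src_app], APP_OBJECTS[dst_app]
        return {"source": src["cidr"], "destination": dst["cidr"], "port": port,
                "src_zone": src["zone"], "dst_zone": dst["zone"]}

    def precheck(rule):
        """Return any violations found before the rule is ever implemented."""
        findings = []
        if (rule["src_zone"], rule["dst_zone"]) in FORBIDDEN_ZONE_PAIRS:
            findings.append("violates zone segmentation policy")
        if "0.0.0.0/0" in (rule["source"], rule["destination"]):
            findings.append("overly permissive: 'any' address")
        return findings

    if __name__ == "__main__":
        candidate = design_rule("marketing-web", "hr-portal", 443)
        print(candidate)
        print(precheck(candidate) or "no violations, safe to stage")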

Tim Woods:
We expect you to look at it and say, where are the pieces of my business that are going to give me the greatest return on my automation strategies and efforts? In the next phase, you start looking at things like, well, if we understand the path we’re trying to take here, the access we’re trying to grant, is it possible that I could go ahead and automatically implement that as well? Could I automatically grab the appropriate documentation, so that when the auditors come and point to a rule in a firewall policy and say, “Hey, tell me about this,” you do see the business owner, you do see when the rule was created, you do see the business justification for that rule, and when the next time is that the rule needs to be reviewed?

Tim Woods:
That you have access, in the context of the security policy, at your fingertips to show that QSA, that qualified security assessor or auditor that’s coming in to look at that stuff. Then the next step, and this is the part that really gets exciting, and it’s exciting for me because we start talking about, and I’ll use the term abstraction, we start talking about creating a centralized security policy that can actually be technically enforced. If I were standing in front of you today, I would ask you to raise your hands. I would say, hey, for everybody that’s in the audience today, how many of you here have absolute confidence that your security implementations are a reflection of your written security policy?

Tim Woods:
I can tell you, in the presentations that I’ve given in front of people, no one has raised their hand yet in absolute confidence that that’s taking place. So, how do I create a centralized security policy that can actually be technically enforced? That’s where Global Policy Controller comes into play. We’re going to help you create a layer of abstraction, and we’re shifting the focus away from the actual rules, the discrete rules on an enforcement point itself. We’re going to start talking about a layer of abstraction that represents a desired security intent to protect an application, an asset, or a resource. Once we define that desired security intent, it doesn’t really matter where that application, asset, or resource resides, or if it moves. That security intent doesn’t change.

Tim Woods:
The things doing the securing might change, but the actual security intent itself doesn’t change. We’re going to create that abstracted policy that can actually be technically enforced, and we’re going to use a policy compute engine to translate this abstracted security intent into a security data control that can then be, and again, I don’t want to trivialize … there’s still a need for good security enforcement, for an enforcement layer, but we can then instantiate this on the appropriate enforcement point. That’s really what GPC is all about. It helps us, on a per-app basis, create that security intent, the golden rules, the guidelines, the guardrails around it, so that whenever a request comes in, we can honor that request as long as somebody is not trying to color outside the lines, as long as somebody is not trying to do something that is inconsistent with what our security policy states.
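
To illustrate the abstraction idea in a generic, vendor-neutral way (this is not GPC's actual data model, and every name below is invented), a security intent could be declared once and then compiled into concrete rules for wherever the application currently lives:

    # Illustrative intent-based policy sketch: declare the intent once, compile it
    # into concrete rules for whatever enforcement point fronts the app today.
    INTENT = {
        "app": "payments-api",
        "allow": [{"from": "web-tier", "port": 8443, "proto": "tcp"}],
        "deny_all_else": True,
    }

    # Where the app happens to live today; if it moves, only this mapping changes.
    PLACEMENT = {"payments-api": {"enforcement_point": "aws-sg-prod", "cidr": "10.8.0.0/24"}}
    TIERS = {"web-tier": "10.1.0.0/24"}

    def compile_intent(intent):
        """Translate the abstract intent into concrete rules for the current placement."""
        target = PLACEMENT[intent["app"]]
        rules = [{"device": target["enforcement_point"], "action": "allow",
                  "source": TIERS[entry["from"]], "destination": target["cidr"],
                  "port": entry["port"], "protocol": entry["proto"]}
                 for entry in intent["allow"]]
        if intent.get("deny_all_else"):
            rules.append({"device": target["enforcement_point"], "action": "deny",
                          "source": "any", "destination": target["cidr"],
                          "port": "any", "protocol": "any"})
        return rules

    if __name__ == "__main__":
        for rule in compile_intent(INTENT):
            print(rule)

If the placement mapping changes, say the application moves from a data center firewall to a cloud security group, rerunning the compile step regenerates the rules while the declared intent stays untouched, which is the point Tim is making about intent not changing when the asset moves.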

Tim Woods:
But we’ve got to get back to that. Right now, guys, we’re not in a state where there is a common security policy guiding our security implementations. As I said earlier, we’ve got new cloud teams being built, we have DevOps, we have business owners and stakeholders and infrastructure people, and then also security and IT people, all applying data security to the applications and assets that are being deployed. But they’re not doing it in the context of a common policy that once mandated our direction, the security application guidelines, the security scenarios that say, from zone A to zone B, this is what’s allowed over this protocol, and things like that.

Tim Woods:
The next level here, let me advance my slide, the next part of this equation is that it needs to stay that way, and if something changes, we need to make sure we can adjust automatically. Let me give you an example. Let’s say somebody puts a rule on an enforcement point somewhere that blocks a business-critical application, or blocks access to a particular resource that our system needs. If somebody implements a rule like that, either accidentally or, unfortunately, maliciously, with nefarious intent, the system needs to be able to auto-detect that and set it back.

Tim Woods:
Or if access is granted from a particular zone to another zone that shouldn’t have communication, shouldn’t have access between each other, we need to be able to adapt to that too, and correct it automatically. For example, somebody creates a rule from the HVAC network to the PCI network. We know that there’s absolutely, positively no reason that that access should ever exist, so how do we set that back? We need a system that continuously adapts to a central policy we created, one that sets the guidelines for how we need to secure the implementation of our applications, assets, and resources.
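
A minimal sketch of that detect-and-revert idea, assuming a generic rule model rather than any particular vendor's, might look like this:

    # Hypothetical continuous-enforcement check: find deployed rules that the
    # central policy says should never exist (e.g., HVAC -> PCI) and revert them.
    FORBIDDEN_PAIRS = {("hvac", "pci")}

    def find_violations(deployed_rules):
        """Return allow rules that contradict the central policy."""
        return [r for r in deployed_rules
                if r["action"] == "allow"
                and (r["src_zone"], r["dst_zone"]) in FORBIDDEN_PAIRS]

    def remediate(deployed_rules):
        """Drop violating rules and return the corrected rule set (the revert step)."""
        violations = find_violations(deployed_rules)
        for v in violations:
            print(f"reverting rule {v['id']}: {v['src_zone']} -> {v['dst_zone']} is never permitted")
        return [r for r in deployed_rules if r not in violations]

    if __name__ == "__main__":
        rules = [
            {"id": 1, "action": "allow", "src_zone": "dmz", "dst_zone": "internal"},
            {"id": 2, "action": "allow", "src_zone": "hvac", "dst_zone": "pci"},  # should be reverted
        ]
        print(remediate(rules))

In practice a check like this would run on a schedule or on every detected change, which is what makes the enforcement "continuous" rather than a one-time audit.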

Tim Woods:
I want to talk for a second about the policy compute engine inside of Global Policy Controller, just really quickly, to give you an idea, and we can jump deeper into this later. We really don’t have time to completely go into it. But we’re going to absorb context, we’re going to create application logic from the context of the network and consume that. We’re going to create logical tags. We’re going to create the security guardrails that are necessary to document our application guidelines, our golden rules, our best practices, our compliance, whatever that happens to be. We need to make sure that all of that is put in place so that when a request does come through and we’re ready to compute what that access needs to look like, we know that, from a policy perspective, it’s sound. Then, automatically, we want to be able to look at what is in scope for this access as well.

Tim Woods:
In other words, what are the devices that I need to touch? This can be very time consuming. If you’re a firewall administrator, you know that sometimes the most time-consuming part of the job is just trying to understand the path to honor the business request from point A to point B: what’s in line, what do I have to touch? How many security enforcement points do I have to touch in order to grant this access? So, I’m using Telnet, I’m using SSH, I’m jumping into the native management tools to look at the rules manually to understand what exists and what doesn’t. Sometimes we see duplicate rules getting created and overly permissive access being granted in order to meet the business deadlines, but we need to automate that. We need to select what is in scope, we need to activate it, and then we need to put it in place.
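
Conceptually, working out what is in scope is a path computation over the network topology. Here is a toy Python sketch with an invented topology; it is only meant to show the shape of the problem, not how the policy compute engine actually works:

    # Toy path computation: which enforcement points sit between two zones?
    from collections import deque

    # Invented adjacency list: each zone lists (firewall, next_zone) hops.
    TOPOLOGY = {
        "branch": [("fw-branch", "core")],
        "core": [("fw-dc-edge", "datacenter"), ("fw-cloud-gw", "aws-vpc")],
        "datacenter": [],
        "aws-vpc": [],
    }

    def enforcement_points_in_path(src, dst):
        """Breadth-first search returning the firewalls traversed from src to dst."""
        queue = deque([(src, [])])
        seen = {src}
        while queue:
            zone, path = queue.popleft()
            if zone == dst:
                return path
            for fw, nxt in TOPOLOGY.get(zone, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [fw]))
        return []

    if __name__ == "__main__":
        print(enforcement_points_in_path("branch", "aws-vpc"))  # ['fw-branch', 'fw-cloud-gw']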

Tim Woods:
But we also need to go back and recheck it consistently too. Continual assessment, continual audit of your environment is critical, because things change. As we get into cloud and we get into digital transformation, stuff like that, change happens even more frequently. We’re seeing that with the acceleration of the business. Again, this concept of continual adaptive enforcement becomes a critical component, and so we need to be able to continually enforce to make sure that if something gets set, or something gets put in play that is inconsistent with what our policy mandates, then we make sure that we reset it.

Tim Woods:
At the end of the day, it’s about efficiency. We’re trying to gain efficiency, a central point of control. Again, we’re creating that layer of abstraction to give you central access to a policy, or a central view on a dashboard with the security metrics that you need in order to make the decisions you need to make for the business. We need to fast track those things that can be fast tracked so that we’re not … so that we get those cycles back in the day. From an integration perspective, we have to look at how we’re bringing more value to the tools that we have in place. I always like to look at vulnerability and threat management. How many of you are taking your vulnerability scans and comparing them to the policies that you have enforced?

Tim Woods:
When we look at our route intelligence, our compensating controls, and the vulnerability scan data that’s coming in, how do we overlay that vulnerability scan data onto our policies to say, hey, here are the known exploits, here are the known vulnerabilities that exist on our networks? Do we have any rules on any of our enforcement point technologies that are allowing access to these vulnerabilities? Especially if the vulnerability is a root-level exploit that a bad actor could reach through a well-known threat entry point and potentially exploit. We need to make sure that we are cognizant of those things and very aware from a discovery perspective.
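
As a rough illustration of that overlay, here is a simplified Python sketch. The scan findings, rule base, and field names are invented; real scan and policy data would be far richer.

    # Illustrative overlay of vulnerability scan data onto firewall policy:
    # flag allow rules that expose a host carrying a known vulnerability.
    import ipaddress

    SCAN_FINDINGS = [  # invented scan output
        {"host": "10.2.0.15", "port": 445, "cve": "CVE-2017-0144", "severity": "critical"},
    ]
    RULES = [  # invented, simplified rule base
        {"id": 101, "action": "allow", "source": "0.0.0.0/0", "destination": "10.2.0.0/24", "port": 445},
        {"id": 102, "action": "allow", "source": "10.9.0.0/24", "destination": "10.2.0.0/24", "port": 443},
    ]

    def exposed_vulnerabilities(rules, findings):
        """Pair each scan finding with the allow rules that grant access to it."""
        hits = []
        for f in findings:
            host = ipaddress.ip_address(f["host"])
            for r in rules:
                if (r["action"] == "allow" and r["port"] == f["port"]
                        and host in ipaddress.ip_network(r["destination"])):
                    hits.append((f["cve"], f["severity"], r["id"]))
        return hits

    if __name__ == "__main__":
        for cve, severity, rule_id in exposed_vulnerabilities(RULES, SCAN_FINDINGS):
            print(f"{cve} ({severity}) is reachable via rule {rule_id}")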

Tim Woods:
Of course, as we start looking at cloud migrations and workload portability, I think I saw a stat that said, God, Elisa, what did it say? It said that by this time next year, over 45% of workloads will be in the cloud. We truly believe that. We’re very focused both on automation in the cloud and on automation in the data center, on-premise, in your hybrid infrastructure. Hybrid is not going away anytime soon. On-premise and legacy, and legacy is not a bad word, it just means these are the systems that we started with, are not going away anytime soon. We have to make sure that the tools that we’re using can address both.

Elisa Lippincott:
Tim, when I see this slide, Jim Croce comes to mind. Are we trying to capture a time in a bottle here?

Tim Woods:
If I could put time in a bottle, I guarantee you, you’d buy it. I always tell everybody that it’s all about getting time back in the day to do the things that we need to. That’s what the customer base is telling us: we definitely need more time in our days. I’m not hiring more people; I’m either not allowed to hire more people, or I can’t fill the positions that I have open. Yeah, it’s time in a bottle. It’s a great song too, right?

Elisa Lippincott:
Love that song.

Tim Woods:
Real-world examples here. I’m not going to read all of these; we don’t have enough time to go through each of them, but again, these are real-world examples from you guys. Maybe some of them apply to you, and obviously, let us know if you have questions on them. I love the one on the global hospital organization there. Just to touch on that one, it was a SOAR application. Their request, basically, was that they were ingesting a global malicious IP tracking list and they wanted to make sure that their security enforcement points were automatically being updated to block that list. Automating a task like that is almost a natural, and they saw a lot of return, a lot of value, in being able to automate updating those things.
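
As a hedged sketch of what that kind of automation boils down to (the feed URL and the push step below are placeholders, not the hospital's actual integration or a FireMon API): pull the feed, diff it against what is already blocked, and push only the delta.

    # Illustrative block-list automation: ingest a malicious-IP feed and push only
    # the new entries to enforcement points. Feed URL and push call are hypothetical.
    import urllib.request

    FEED_URL = "https://example.com/malicious-ips.txt"  # placeholder feed

    def fetch_feed(url):
        """Download a newline-delimited list of IPs to block."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            return {line.strip() for line in resp.read().decode().splitlines() if line.strip()}

    def push_block_list(new_ips):
        """Stand-in for the API call that updates the block group on each firewall."""
        for ip in sorted(new_ips):
            print(f"adding {ip} to the global block group")

    def sync(currently_blocked):
        feed = fetch_feed(FEED_URL)
        delta = feed - currently_blocked  # only push what is actually new
        if delta:
            push_block_list(delta)
        return currently_blocked | delta

    if __name__ == "__main__":
        blocked = {"203.0.113.7"}
        blocked = sync(blocked)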

Tim Woods:
Again, it’s just cycles in the day that we don’t have to worry about dedicating, or providing a dedicated resource to, if we can automate those things. It’s the gift that keeps on giving when we start looking at automation. The other thing I wanted to bring up here that, again, I think is of growing importance is APIs. Having a strong, robust API allows you to easily integrate with other technologies, whether you’re looking to enrich the data of another technology, or you want to take the data from that technology and bring it into something like FireMon to further enrich some of the FireMon data, or vice versa. Being able to easily exchange information between systems is of growing importance.

Tim Woods:
At FireMon, we’re dedicated to making sure that our API and our RESTful services are as robust as possible. Whether you’re using them today or not, I guarantee it’s something that you’re going to have an eye on in the future. So, it’s something I would start looking at today.

Elisa Lippincott:
Tim, if any of our customers aren’t using our API, how do they get started?

Tim Woods:
Great question. Our customers that are on the line probably know this already, but if you don’t, our APIs are embedded in the system itself. If you click on the admin tab, go over to the far right, find the little question mark, and pull that down, you’ll see some stuff about documentation, and the second item down says API reference. You can go into that and, for the modules that you have access to within the platform, you will see the actual APIs themselves. Now, this is a Swagger … it’s not just presenting you the API data. We actually do it over something called a Swagger interface; Swagger was acquired, and it’s now the OpenAPI 2.0 type of interface. Not only can you see the APIs on the platform itself, inside the Security Manager solution, but you can actually exercise those APIs.

Tim Woods:
You can actually test them. Go down to the SiQL, the security query language. This gets a little deeper, and I’d love to sit down and run through it, like an actual demo or something like that, but you can actually make a call from within Security Manager, expose what that SiQL query looks like, copy and paste it, and bring it into an API call and see the same results from within the API. This is how you would make an external call to get that same information, if you wanted to extract it to populate another system. So, we make it pretty easy. Now, I know that was probably too much information in a short amount of time, but again, I’d love to walk through it with you to show you how that works.
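
For anyone who wants to take that a step further, here is a generic Python sketch of calling a REST API from outside the product to pull data into another system. The host, endpoint path, query, and field names are placeholders, not FireMon's documented API; the Swagger/API reference inside your own installation is the authoritative source for the real calls.

    # Generic REST-call pattern for pulling policy data into another system.
    # Everything below (URL, path, query, token handling) is a placeholder;
    # consult the API reference in your own installation for the real endpoints.
    import json
    import urllib.parse
    import urllib.request

    BASE_URL = "https://firemon.example.local/api"   # placeholder host and path
    QUERY = "device { name = 'edge-fw' }"            # placeholder SiQL-style query

    def run_query(query, token):
        """Send the query to the placeholder endpoint and return parsed JSON."""
        url = f"{BASE_URL}/query?q={urllib.parse.quote(query)}"
        req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.loads(resp.read().decode())

    if __name__ == "__main__":
        results = run_query(QUERY, token="REPLACE_ME")
        print(json.dumps(results, indent=2))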

Tim Woods:
But if you’re not taking advantage of it, it’s easy to start. It is there, along with the facilities for you to learn more about the API structure and how it might benefit you, not just today but into the future, because being able to exchange information between your systems can raise the total value of your security solutions, no doubt. We talked a lot about automation here, what we can automate, and the pieces of automation. I just wanted to revisit, very quickly, more of a logical map first, to show you where the automation sits, and I’m showing that on top. What you see in orange there is Security Manager.

Tim Woods:
This is dynamic compliance at scale. This is where the APIs reside. This is where we normalize the policies and store them so that you can see them across your environment. This is where the integrations with other companies take place, and also where we consume the enforcement technology data that we normalize and then present to you in the screens, and create reports, compliance checks, and assessments around. On top of all that is where this security automation layer resides. Then, of course, there’s the reporting and the visualization of that. We have to be able to visualize this as well, and to get the reporting information out of the system, because at the end of the day, we’re putting data into the system that at some point you’re going to want to consume.

Tim Woods:
I wanted to show you that, but then I also wanted to revisit the pieces, the modules, on the FireMon security management platform here. At the bottom, of course, we built a very high-performance platform with integrated Elasticsearch. Hopefully, most of you out there today are taking advantage of the omni-search up at the top. Sometimes it’s overlooked, but that’s all driven by pre-indexed Elasticsearch, so that when I click on it and type in a vendor name or an object name or an IP address, it just comes back very quickly. That’s very powerful, right? Being able to use a natural language search, put something into a search bar, and have it automatically come back and show you where that resides across every single security policy in your entire security real estate, your entire infrastructure, is incredibly powerful, especially when you can just type it in and, boom, it comes back almost instantaneously.

Tim Woods:
The power of Elasticsearch is very important. I talked about the RESTful APIs that you see there at the bottom. Of course, Security Manager is what I call the tie that binds, right? It’s the ether there. Over to the right, if you’re not familiar with LUMETA, hopefully most of you are, but if you’re not, we acquired a technology company a little over a year ago, in April of last year, I believe it was. We are just so excited about it. LUMETA is a discovery tool, and we are working aggressively to integrate the LUMETA technology to become a core component inside of FireMon, but that doesn’t mean that you can’t use it today.

Tim Woods:
LUMETA will go out, validate address space, validate the edge, and conduct a census of the infrastructure to find out what’s out there. It’s very hard to manage the things that you don’t know about, and it’s even harder to secure them. Imagine the Secret Service trying to secure the president if they didn’t know where he was at a given point in time. Being able to understand what it is that you have to secure and where it’s at, understanding where your data is from an ingress and egress perspective, and making sure that you’re doing leak path tests. I’m getting tongue-tied here at the end, guys; we’re coming up on time, I guess. Making sure that data isn’t going out where you don’t expect it to, i.e., a leaky bucket, an S3 bucket, or something like that.

Tim Woods:
Then, of course, at the top up there, let’s talk about Policy Optimizer. It’s a way to do your rule recertification. Policy Planner, this is workflow, right? This is workflow at its best, flexible, customizable workflow, and it represents a really good place to start from an automation perspective. Global Policy Controller, I couldn’t be more excited about that. We talked about policy abstraction; this is where the policy abstraction lives. This is where we create that desired security intent to protect those applications, assets, and resources. Global Policy Controller is a collaborative platform, where your business owners, your stakeholders, the compliance teams, the architecture teams, everyone else, and the security folks can all become participants on this collaborative automation and security intent platform. Very important.

Tim Woods:
Of course, I would be remiss if I didn’t talk about Risk Analyzer, the ability to assess risk on an ongoing basis and to correlate risk data with the policies enforced as well. I’m going to stop right there. Elisa, I don’t know, do we have some questions from the audience? Do we have some things to talk about here in the last couple of minutes?

Elisa Lippincott:
I can check real quick. For those of you in the audience, if you have a question, please submit it below. If we run out of time, we will make sure and follow up. We do have a few in here, Tim. First one, does this solution acknowledge and interface with legacy firewall, hardware and vendors?

Tim Woods:
I would have to know, and I’m looking at that question too, I would have to know what the legacy firewall is. Let’s face it, guys, we’ve been around for a while. I told you I’ve been here 12 years. The company’s first product started shipping back in 2003 and 2004. I mean, we still have support for the Cisco PIX. We still have support for NetScreen. We have support for a lot of the legacy firewalls that are out there. So, I’d have to know specifically which firewalls you’re talking about, but yeah, we still have a lot of legacy support for systems that aren’t even maintained by the native vendors today.

Elisa Lippincott:
Great. Next one. Can I deploy multiple levels of the FireMon automation, or do I have to pick one?

Tim Woods:
No, that’s a great question. That’s kind of the cherry on top with this. As I said earlier, you don’t have to boil the ocean here. You can pick and choose the pieces of automation that best fit your business cadence. Where does it fit? Where are you going to reap the greatest rewards? Where are you going to realize the biggest benefits from the different pieces of the automation model that we have available for you? It’s definitely not a set-it-and-forget-it, but it’s also not an all-or-nothing type of thing either. You can pick and choose the things that make sense, and grow with it too. It can be a crawl, walk, run, as you have the ability to consume it, absorb it, and realize value from it.

Elisa Lippincott:
But you could have some of your tasks set in the automated design phase, and you can have other ones in the zero-touch automation. So, you can just use whatever makes sense for your organization, right?

Tim Woods:
That’s absolutely correct.

Elisa Lippincott:
We have time for one more question. What are some of the misconfiguration issues that you’ve come across?

Tim Woods:
That’s a good question. Probably the biggest one that I see most often, and that is the most concerning, is overly permissive rules being put in place. We talked about how fast the business is moving, the velocity of business. We see business trumping security. I see it all the time. What usually happens is, in order to meet the request that comes in, when I don’t completely know how to create the most well-defined rule or the tightest rule possible, I’ll put in a rule that’s overly permissive in order to meet the objective of the day, with the good intent of going back and correcting it later. But again, as I talked about, with all those different priorities on the plate, sometimes that good intent doesn’t come to fruition, and all of a sudden this overly permissive rule takes on a life of its own and gets buried.

Tim Woods:
Then all of a sudden, we have inadvertent access, or we’re allowing way more access than what actually meets the needs of the business. I did see one more question here, and I’m just going to hit it since we have about one minute left. Does FireMon automation also do cloud firewalls, or just the on-premise firewalls? We have support for Azure and for Google Cloud Platform. We have support for AWS. So definitely, when I talk about hybrid, supporting the hybrid infrastructure, I’m talking about the cloud instances, the cloud security groups, and the cloud firewalls as well.
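
Coming back to the overly permissive rules Tim described, here is a trivial Python sketch of how one might flag them. The rule model and data are invented; real policies carry far more fields.

    # Illustrative check for overly permissive rules: any allow rule with "any"
    # in its source, destination, or service gets flagged for review.
    RULES = [  # invented, simplified rule base
        {"id": 7, "action": "allow", "source": "any", "destination": "10.5.0.10", "service": "tcp/443"},
        {"id": 8, "action": "allow", "source": "10.1.0.0/24", "destination": "10.5.0.10", "service": "any"},
        {"id": 9, "action": "allow", "source": "10.1.0.0/24", "destination": "10.5.0.10", "service": "tcp/22"},
    ]

    def overly_permissive(rules):
        """Flag allow rules where any of the three match fields is wide open."""
        return [r for r in rules
                if r["action"] == "allow"
                and "any" in (r["source"], r["destination"], r["service"])]

    if __name__ == "__main__":
        for rule in overly_permissive(RULES):
            print(f"rule {rule['id']} is overly permissive: {rule}")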

Elisa Lippincott:
Great. Well, that is all the time we have for today. Thank you for attending today’s webinar. We hope you found this informative. I’d like to thank Tim Woods for his time today and for this great session. This concludes our webinar. Thank you, and have a nice day.

Tim Woods:
Thank you very much, everyone.


Get 90% Better. See How to Get:

  • 90% EFFICIENCY GAIN by automating firewall support operations
  • 90%+ FASTER time to globally block malicious actors
  • 90% REDUCTION in FTE hours to implement firewalls
