The most recent post on our blog noted that understanding your organization’s exposure to risk is no small task. I have seen enterprises attempt to manage risk by feel or intuition, or by simply reacting when executive leadership reads about the latest breach of the week and wants assurance that they aren’t exposed to the same calamity. Fortunately, enterprises today are attempting to analyze and measure risk through a more formal process. Many do so by running vulnerability scanners against parts of their network, or the network in its entirety, at some predetermined interval. In either case, scans are run, vulnerabilities are identified and possibly prioritized based on asset value, patching activities are scheduled over the next month or quarter, and the cycle repeats. Some organizations even take the results of these efforts and assign a score, value or state to their risk posture.
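To make the scan-and-score cycle concrete, here is a minimal sketch of the kind of naive weighted score such a process produces. Everything here is a hypothetical illustration: the field names, hosts, weights and formula are assumptions for this post, not the scoring scheme of any particular scanner or product.

```python
# Naive periodic risk score: each finding's severity (e.g., a CVSS-style
# 0-10 rating) is weighted by the value of the asset it was found on,
# then summed. All names and weights are hypothetical.

def naive_risk_score(findings):
    """findings: list of dicts with 'severity' (0-10) and 'asset_value' (1-5)."""
    return sum(f["severity"] * f["asset_value"] for f in findings)

# Results of one monthly scan (hypothetical hosts and values).
monthly_scan = [
    {"host": "dmz-web-01", "severity": 7.5, "asset_value": 3},
    {"host": "fin-db-01",  "severity": 9.8, "asset_value": 5},
]

score = naive_risk_score(monthly_scan)  # 7.5*3 + 9.8*5 = 71.5
```

The limitation is baked into the design: the score is frozen at scan time, so any change to the environment between scans, a new connection, a new partner, a new server, alters real exposure without moving the number at all.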

The measurement of risk described above oversimplifies risk within today’s networks. Truly understanding your actual risk posture is far more complex. Different threats and different assets define different risks, and risk itself is constantly in flux in the enterprise environments we work in today. With M&A activity, strategic partnerships being formed or abandoned, new data centers being brought up, existing data centers being consolidated, and IT functions moving into the cloud, risk is a never-ending moving target in most enterprise environments. Given the standard process, in which an organization runs a vulnerability scanner at set intervals and scores its risk posture based on the actions completed after that event, it’s easy to see how this score fails to reflect the true state of the organization’s risk.

Consider an example: a security group runs an enterprise scan at the beginning of each month and then schedules remediation actions over the next three weeks. In the second week of the month, a business group requests a new VPN connection to a newly formed business partner. This access requires connectivity from the partner network to a DMZ web server farm that is protected by a firewall cluster. The web farm is a front end to an internal financial database that is protected by another cluster of firewalls. The monthly process the organization follows does not allow it to react to the new variable that has been introduced into its risk posture. Furthermore, even if the organization were to scan across this newly created connection, the scanner would simply be blocked by the firewall clusters. The scanner has no awareness of the firewall configuration policy or of how data actually flows through the networking devices, firewalls and any other network security controls sitting in front of the web server front end and the back-end database servers. This speaks to the importance of factoring the full context of network security controls and data connectivity into risk analysis, as we have previously covered on this blog.
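The scanner’s blind spot in this scenario can be sketched with a deliberately simplified reachability model. The zone names, ports and rules below are invented for illustration; a real firewall policy model is far richer, but even this toy version shows why composing rules across hops tells a different story than a direct scan:

```python
# Toy reachability model: a firewall policy as a set of allowed
# (source_zone, dest_zone, port) tuples. All zones, ports and rules
# here are hypothetical, for illustration only.

ALLOWED = {
    ("partner_vpn", "dmz_web", 443),   # new partner link to the web farm
    ("dmz_web", "internal_db", 1433),  # web farm to the financial database
    # No rule permits the scanner's segment to reach the database,
    # and no rule permits partner_vpn -> internal_db directly.
}

def can_reach(src, dst, port):
    """Return True if the policy allows direct traffic src -> dst on port."""
    return (src, dst, port) in ALLOWED

# A scanner outside these zones is simply blocked at the firewall:
scanner_sees_db = can_reach("scanner_segment", "internal_db", 1433)  # False

# Yet a multi-hop path exists: partner network -> web farm -> database.
indirect_path = (can_reach("partner_vpn", "dmz_web", 443)
                 and can_reach("dmz_web", "internal_db", 1433))       # True
```

The scan would report the database as unreachable, while an attacker who compromises a web server on the permitted path can still reach it. Path-aware analysis, composing firewall rules hop by hop, surfaces exposure that a point scan cannot.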

Analyzing and scoring risk based solely on enterprise-wide scanning or patching efforts does not give an organization an accurate measurement of its true risk posture. In the second part of this post, we will discuss a better approach for gaining more accurate, real-time awareness of what an organization’s risk state truly is.