An April Fools’ Reflection After RSAC
The RSAC Reality Check
We just got back from RSAC, and if you spent any time on the floor, one thing was impossible to miss. Every conversation, every booth, and every demo centered on AI. It did not matter what the product actually did; AI was the headline. It promised to simplify security, accelerate decisions, and reduce the burden on already stretched teams. After a while, it all started to sound the same, and when everything sounds the same, it becomes harder to take any of it seriously.
The Joke That Hit a Little Too Close
That is exactly why we leaned into it for April Fools’ Day. We introduced a fictional version of FireMon AI that adjusts security policy based on how you feel. If you are feeling bold, it opens access (permit tcp any → any eq 443). If you are overwhelmed, it locks everything down (deny ip any → any). If you are nostalgic, it re-enables a rule that no one fully understands (permit ip any → 10.10.0.0/16). The joke landed because it was obviously ridiculous, but it also felt just close enough to reality to make people pause.
The Real Problem Is Not Tools
The reality is not that security teams are making emotional decisions. The reality is that the environments they operate in often behave as if they are. Most organizations are not lacking tools. They have firewalls, cloud controls, segmentation platforms, and a growing stack of technologies that promise visibility and protection. What they lack is consistent control across all of it – especially in highly complex network security environments. Policy is distributed, fragmented, and constantly changing in ways that are difficult to track. Changes are made under pressure, exceptions accumulate over time, and access paths expand without a clear understanding of the downstream impact.
Where Things Start to Break
At some point, teams lose the ability to answer a basic question with confidence. They cannot clearly explain what is allowed across the environment or whether it aligns with what was originally intended. That is where risk is introduced, and it has nothing to do with whether AI is present or not. This is the part of the conversation that gets skipped too often. AI is being positioned as the answer to complexity, but AI without context does not reduce complexity. It processes it faster.
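To make "what is allowed" concrete: answering that question for even one flow means walking an ordered rule base top-down, the way most firewalls evaluate traffic. Here is a minimal, hypothetical sketch in Python – the rule model and rule data are invented for illustration, not any vendor's format:

```python
import ipaddress

# Hypothetical, simplified rule model for illustration only:
# (action, protocol, destination network, destination port).
RULES = [
    ("permit", "tcp", "10.10.0.0/16", 443),
    ("deny",   "ip",  "0.0.0.0/0",    None),  # catch-all deny
]

def is_allowed(proto: str, dst_ip: str, dst_port: int) -> bool:
    """First-match evaluation, as most firewalls apply rules top-down."""
    addr = ipaddress.ip_address(dst_ip)
    for action, r_proto, r_net, r_port in RULES:
        proto_match = r_proto in ("ip", proto)           # "ip" matches any protocol
        net_match   = addr in ipaddress.ip_network(r_net)
        port_match  = r_port is None or r_port == dst_port
        if proto_match and net_match and port_match:
            return action == "permit"
    return False  # implicit deny if nothing matches

print(is_allowed("tcp", "10.10.5.20", 443))  # True: hits the HTTPS permit
print(is_allowed("tcp", "192.0.2.1", 443))   # False: falls to the catch-all deny
```

With two rules this is trivial. With thousands of rules spread across dozens of enforcement points, each in its own syntax, nobody can run this evaluation in their head – which is exactly the loss of confidence described above.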
Why AI Alone Does Not Fix It
If the underlying policy is inconsistent, disconnected, and shifting over time, then AI is working from a flawed foundation. In that situation, you might get insights, but you do not get clarity. You might get recommendations, but you do not get control. In network security, control is the entire objective. Without it, everything else becomes reactive.
Resetting the Conversation: Policy Is the Control Plane
This is where the conversation needs to reset. Firewalls enforce decisions. Cloud platforms enforce decisions. Segmentation tools enforce decisions. None of them define intent across the entire environment. Policy is what defines intent, and that is why policy is the control plane. It is the layer that ensures consistency between what the organization believes should happen and what is happening across every enforcement point. When that layer is missing or fragmented, drift is inevitable. When that layer is in place, the environment becomes understandable again.
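One way to picture that layer: once rules from every enforcement point are normalized into a common model, drift becomes a simple set comparison between declared intent and what each point actually enforces. The sketch below is hypothetical – the data structures and names are invented for illustration, not a real product API:

```python
# Declared intent: the flows the organization means to allow,
# in a normalized (protocol, destination network, port) form.
INTENDED = {("tcp", "10.10.0.0/16", 443)}

# What each enforcement point actually enforces (hypothetical data).
DEPLOYED = {
    "edge-firewall": {("tcp", "10.10.0.0/16", 443)},
    "cloud-sg":      {("tcp", "10.10.0.0/16", 443),
                      ("tcp", "0.0.0.0/0", 22)},  # stray SSH-from-anywhere
}

def drift_report(intended, deployed):
    """Flag rules enforced somewhere but never declared, and vice versa."""
    report = {}
    for point, rules in deployed.items():
        extra   = rules - intended   # enforced but not intended
        missing = intended - rules   # intended but not enforced
        if extra or missing:
            report[point] = {"extra": extra, "missing": missing}
    return report

print(drift_report(INTENDED, DEPLOYED))
# flags the stray SSH rule on "cloud-sg"; "edge-firewall" is clean
```

The hard part in practice is not the set arithmetic – it is the normalization step that gets every firewall, cloud control, and segmentation platform speaking the same rule model in the first place.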
Where AI Actually Delivers Value
AI becomes valuable only after that foundation exists. When policy is normalized and consistent across firewalls, cloud, and segmentation, AI has the context it needs to produce meaningful outcomes. FireMon operates at that layer as the policy control plane, which allows AI to analyze policy across systems and identify real exposure. It can surface gaps that are not visible when tools are viewed in isolation, and it can prioritize what actually needs attention so teams are not overwhelmed by noise. The result is not more data or more dashboards. The result is clear direction, which allows teams to move from analysis to action.
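As one illustration of prioritizing over a normalized rule base, a simple breadth heuristic already pushes the widest exposure to the top of the queue. This is a hypothetical scoring function, not how any particular product ranks risk:

```python
import ipaddress

# Hypothetical normalized rules: (action, protocol, destination network, port).
rules = [
    ("permit", "tcp", "10.10.0.0/16", 443),   # scoped HTTPS rule
    ("permit", "tcp", "0.0.0.0/0", 22),        # SSH open to the world
    ("permit", "ip",  "0.0.0.0/0", None),      # any protocol, any port, anywhere
]

def breadth(rule):
    """Rough exposure score: bigger networks, all-ports, and
    any-protocol rules all multiply the breadth."""
    action, proto, net, port = rule
    score = ipaddress.ip_network(net).num_addresses
    if port is None:
        score *= 65536  # matches every port, not just one
    if proto == "ip":
        score *= 2      # matches every protocol, not just one
    return score

for rule in sorted(rules, key=breadth, reverse=True):
    print(rule)
# the any/any rule ranks first, the scoped HTTPS rule last
```

A real prioritization would weigh reachability, asset criticality, and exploitability, but the point stands: ranking only becomes meaningful once the rules are in one consistent model.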
Back to the Joke
Going back to the April Fools’ joke, the idea of emotion-driven security is obviously not real, but environments that behave unpredictably due to lack of policy control are very real. Rules are opened to solve immediate problems, temporary fixes become permanent, and legacy policies remain untouched because no one wants to risk disruption. Over time, the system starts to feel inconsistent and reactive, even when the people managing it are highly capable.
The Bottom Line
That is not a failure of effort or expertise. It is a failure of control.
AI does not solve that on its own. If anything, applying AI to a broken foundation risks accelerating the problem. However, when AI is applied to a strong policy control plane, it becomes a force multiplier. It helps teams understand what is wrong, how their environment compares to others, and what actions will have the greatest impact. That is where AI starts to deliver on its promise in a way that is practical and grounded.
The industry does not need more AI claims. It needs a clearer understanding of where AI fits and where it does not. The fundamentals have not changed. Security still depends on knowing what is allowed, why it is allowed, and whether it should remain that way over time.
Policy is not a side conversation in that equation.
Policy is the control plane.