
Two Security Conversations Dominated RSAC 2026 - And They're Actually One Problem 

March 2026  /  8 min. read
Art Poghosyan

There's a particular kind of energy at RSAC when the industry is genuinely working through something hard. Not the buzz of a new vendor category, not the noise of a fresh acronym. Every conversation I had at RSAC 2026, whether in investor roundtables, customer meetings, or on the expo floor, eventually landed in the same place.

Two topics kept coming up, and on the surface, they looked like two separate security priorities. The more I compared notes throughout the week, the more convinced I became that they're actually two expressions of the same underlying problem. 

First, the Adoption Reality Check 

Before you can talk about securing AI, you have to be honest about why organizations are deploying it, and how fast that's actually happening. 

Enterprises aren't experimenting with AI anymore. They're running it in production. Customer service teams are using AI agents to handle tier-1 support at scale. Developers are shipping code with AI copilots baked into every step of the pipeline. Finance teams are automating reconciliation, reporting, and vendor management. Operations teams are building agentic workflows that reach directly into ERP systems, procurement data, and internal knowledge bases. 

And here's where it gets structurally interesting: these workloads are fundamentally different from traditional software. AI agents are non-deterministic. They reason. They make decisions. They initiate actions based on context, not just instructions. That's the point - and also the problem. 

Because most enterprises are building this hybrid organization, part human, part agent, on a security infrastructure that was designed for a world where everything was deterministic, and everything that needed access was a person. 

That mismatch is the structural wall the industry ran into at RSAC 2026. 

The Two Conversations at RSAC 

Walking the floor and sitting in the rooms this year, two themes dominated every substantive conversation: 

  1. How do you manage agentic identity, access, and privilege? 
  2. How do you secure data in AI and agentic AI environments? 

At first glance, these feel like separate problems owned by separate teams - IAM on one side, DSPM and AI-specific data controls on the other. Two distinct budget line items. Two separate vendor conversations. 

But that framing is the problem. 

The Common Thread: Identity and Privilege Are the Foundation 

Every agent, regardless of what it's doing or where it's deployed, has to access something to be useful. And the moment it accesses anything - data, systems, APIs, internal tools - a privilege decision has already been made. The question is whether it was made deliberately or by default. 

That access decision, made at the identity and privilege layer, is the first domino. Everything downstream depends on getting it right. 

If an agent has broader access than its use case requires, exposure already exists before a single data security tool enters the picture. If you have no visibility into what an agent accessed, when, and under what context, your ability to detect anomalous behavior or prevent data exfiltration becomes reactive at best. 

Data security tools for AI - the ones designed to block inference attacks, prevent exfiltration of sensitive outputs, and enforce data residency - are real and necessary. But they only work when layered on top of a well-governed identity and privilege foundation. Without that foundation, you're patching holes in a wall that's already been bypassed. 

This was the realization surfacing across conversations at RSAC: identity and privilege management isn't just an IAM problem anymore. It's the prerequisite for every other AI security control to function. 

Three Silos. One Problem. 

What RSAC 2026 made clearer than any previous year is that three security domains - previously managed in isolation - are now converging into a single challenge: 

Agentic Identity - managing the high-velocity privileges these agents require to function, at machine speed, across thousands of simultaneous sessions. 

Data Security - preventing inference attacks and exfiltration as agents move through sensitive datasets, often in ways no human ever directly reviews. 

Agentic Endpoint Security - providing visibility into the AI applications and coding agents running directly on endpoints, where teams increasingly need to deploy tools at speed without losing sight of what those tools are actually doing. 

In a hybrid organization, these aren't three separate problems. An agent with the privilege to access data at the endpoint has the privilege to compromise the enterprise at machine speed. The attack surface isn't additive - it's multiplicative. 

One technical reality kept surfacing underneath every one of these conversations: Model Context Protocol. 

MCP is how agents connect to external data sources and tools at runtime. It's the access layer that makes agentic AI actually functional - letting agents query databases, call APIs, and interact with enterprise systems dynamically and in context. That's precisely what makes it so consequential from a security standpoint. 

Right now, MCP largely operates outside the visibility of traditional security controls. There's no standardized governance model for it. Most organizations don't yet have a clear picture of which agents are connecting to what, through which MCP integrations, with what level of implicit trust baked in. 

This isn't a separate problem from identity, data security, and endpoint visibility. It's where all three intersect in a single, ungoverned layer. If you're thinking about agentic identity and privilege management and you're not thinking about MCP, you're governing the agent but not the connections the agent is making at runtime, at machine speed, across your entire environment. 
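To make the governance gap concrete, here's a minimal sketch of what a policy layer in front of MCP tool calls could look like: an agent's managed identity carries an explicit allowlist of (server, tool) pairs, and every invocation is authorized and audited before it happens. All names here (`AgentIdentity`, `McpPolicy`, the example servers and tools) are hypothetical illustrations; MCP itself specifies the connection protocol, not this governance layer.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """A managed identity for an agent (hypothetical model)."""
    agent_id: str
    allowed_tools: frozenset  # (server, tool) pairs this agent may invoke

@dataclass
class McpPolicy:
    """Allowlist-style gate in front of MCP tool invocations."""
    audit_log: list = field(default_factory=list)

    def authorize(self, identity: AgentIdentity, server: str, tool: str) -> bool:
        allowed = (server, tool) in identity.allowed_tools
        # Every decision is recorded - so the privilege decision is
        # made deliberately, and visibly, rather than by default.
        self.audit_log.append((identity.agent_id, server, tool, allowed))
        return allowed

# Usage: a support agent may look up tickets in the CRM,
# but has no implicit trust anywhere else.
agent = AgentIdentity(
    agent_id="support-bot-7",
    allowed_tools=frozenset({("crm-server", "lookup_ticket")}),
)
policy = McpPolicy()
assert policy.authorize(agent, "crm-server", "lookup_ticket") is True
assert policy.authorize(agent, "hr-server", "read_salary") is False
```

The point of the sketch isn't the data structures; it's that the authorization and the audit trail live in one place, at the connection layer, rather than being inferred after the fact.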

The Static Control Problem 

At RSAC, I kept hearing about "better vaults." Vendors are pitching more sophisticated ways to lock credentials in a digital box. And I understand the intuition - if the keys are secure, the agents can't misuse them. 

But the industry is starting to name what I'd call the vault fallacy: the idea that you can secure a hybrid organization of thousands of non-deterministic agents by simply locking their keys in a digital box. That architecture hits a structural wall the moment you try to scale. Agents don't operate on a schedule you can predict. They don't request access the way a human logs into a system. They operate at runtime, dynamically, in parallel, often across environments that span multiple vendors and clouds. 

A vault is a static control. Agentic AI is a dynamic behavior. The mismatch is architectural, not operational. 

From Detection to Remediation, and Where It Has to Happen 

A second major shift was equally clear on the floor: security tools that "analyze and find" issues are now incomplete. 

The traditional security posture management (SPM) model - surface the risk, show it on a dashboard, let a human decide - was built for a human-speed environment. In a world of non-deterministic AI, by the time a human reviews that dashboard, the agent has already moved. The damage may already be done. 

The future standard requires security to operate at the runtime execution layer - where the system can automatically recommend or perform remediation the moment an anomaly is detected, not after a ticket is opened. 

This point was sharpened in my conversations with peers who'd spent time in roundtables with some of the larger platform players. The consistent message: context is what makes automated remediation possible. Not just security telemetry - but IT infrastructure data, application behavior, and multi-vendor environment visibility, all synthesized in real time. The SIEM and the data lake aren't just logging infrastructure anymore. They're the substrate that makes context-aware, runtime security possible. 
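As an illustration of the shift from "analyze and find" to runtime remediation, consider a check that compares an agent's data access against a contextual baseline and revokes the session inline, rather than opening a ticket. Everything here is a hypothetical sketch: the names, the threshold, and the baseline lookup all stand in for the kind of context a SIEM or data lake would supply in practice.

```python
from dataclasses import dataclass

@dataclass
class AccessEvent:
    """One observed data access by an agent."""
    agent_id: str
    resource: str
    records_read: int

def baseline_for(agent_id: str) -> int:
    # Stand-in for real context from the SIEM / data lake:
    # what "normal" read volume looks like for this agent.
    baselines = {"recon-bot": 100}
    return baselines.get(agent_id, 50)

def remediate(event: AccessEvent, revoke) -> str:
    """Runtime check: anomalous volume triggers immediate revocation,
    not a dashboard entry awaiting human review."""
    if event.records_read > 10 * baseline_for(event.agent_id):
        revoke(event.agent_id)  # act at machine speed
        return "revoked"
    return "allowed"

revoked = []
status = remediate(AccessEvent("recon-bot", "customers_db", 5000), revoked.append)
# 5000 records against a baseline of 100: the session is cut inline.
assert status == "revoked" and revoked == ["recon-bot"]
```

A real system would reason over far richer context than a single volume threshold, but the design choice is the same: the decision and the remediation happen in the execution path, not after it.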

The Roadmap: Crawl, Walk, Run 

CISOs aren't flipping a switch on AI governance - and frankly, they shouldn't. Trust in autonomous systems is built incrementally, the same way autonomous vehicles evolved from adaptive cruise control to lane assist to hands-free highway driving. Each step required evidence, not promises. 

The practical path forward looks something like this: 

Crawl - Identity Parity. Start by assigning managed identities to every agent in your environment. This single step eliminates the shadow AI blindspot - the sprawl of agents operating with no identity, no audit trail, and no access governance. You can't manage what you can't see. 

Walk - Context-Rich Governance. Build the data layer that gives your security tooling the context it needs to make intelligent decisions. Security telemetry alone isn't enough. You need IT infrastructure data, application behavior signals, and visibility across multi-vendor environments - all feeding a foundation that can reason about what's normal and what's not. 

Run - Zero Standing Privileges. The end state is a runtime execution model where zero standing privileges (ZSP) is the default. Access is minted at the moment it's needed and destroyed when the task is complete - at machine speed, matching the velocity of the agents themselves. This is the architecture that eliminates the vault fallacy structurally, not just operationally. 
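The "run" state can be sketched in a few lines: a broker that mints a credential scoped to one resource, with a short lifetime, and destroys it on task completion or expiry - so nothing standing is left to steal from a vault. The `ZspBroker` class and all identifiers below are hypothetical illustrations, not a reference to any specific product.

```python
import secrets
import time

class ZspBroker:
    """Sketch of zero standing privileges: credentials exist only for
    the lifetime of a task, scoped to a single resource."""

    def __init__(self):
        self._live = {}  # token -> (resource, expiry time)

    def mint(self, agent_id: str, resource: str, ttl_s: float = 1.0) -> str:
        """Mint a short-lived token at the moment access is needed."""
        token = secrets.token_hex(8)
        self._live[token] = (resource, time.monotonic() + ttl_s)
        return token

    def check(self, token: str, resource: str) -> bool:
        entry = self._live.get(token)
        if entry is None:
            return False
        res, expiry = entry
        if time.monotonic() > expiry:
            del self._live[token]  # expired grants are destroyed, not parked
            return False
        return res == resource  # scoped: only the named resource is reachable

    def revoke(self, token: str) -> None:
        """Destroy the grant when the task completes."""
        self._live.pop(token, None)

broker = ZspBroker()
t = broker.mint("finance-bot", "invoices_api", ttl_s=0.05)
assert broker.check(t, "invoices_api")      # valid during the task
assert not broker.check(t, "payroll_api")   # wrong resource: denied
broker.revoke(t)                            # task complete: grant destroyed
assert not broker.check(t, "invoices_api")  # no standing privilege remains
```

Contrast this with the vault model: there is no long-lived secret to rotate or exfiltrate, because the grant's lifetime is the task's lifetime.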

The “So What”

If I had to distill what RSAC 2026 told me into a single working thesis, it's this: 

The organizations that will deploy agentic AI effectively (without creating security tech debt they'll spend years cleaning up) are the ones that treat identity and privilege management as the foundation, not an afterthought. 

Not because data security tools don't matter. Not because endpoint visibility isn't real. But because none of those controls operate in a vacuum. They all depend on knowing who or what is accessing what, with what level of privilege, and under what context. 

What looked like two separate conversations at RSAC - securing agentic identities and securing data for AI workloads - was actually the same conversation, showing up in two different rooms. The practitioners who walked away with the clearest path forward were the ones who stopped treating them as separate problems. 

The hybrid organization is already being built. The question is whether the security architecture being built alongside it can actually keep up.