Cisco Secures Agentic AI at RSA 2026: Zero Trust for Non-Human Identities
April 4, 2026 · 8 min read · By Connie
TL;DR
At RSA Conference 2026, Cisco announced a comprehensive security stack for AI agents — extending Zero Trust to non-human identities for the first time. The suite includes cryptographic agent identity management, an MCP gateway for intent-aware monitoring, AI Defense Explorer for pre-deployment red teaming, and DefenseClaw as open-source runtime security. The business problem: 85% of large enterprises test AI agents, but only 5% deploy them at scale. Security is the wall. Cisco is building the door.
The agentic AI era has created a security problem that existing frameworks were not designed to solve. Traditional Zero Trust security was built for human identities — employees, contractors, partners logging into systems. AI agents are something else: non-human identities that autonomously execute workflows, call APIs, read sensitive data, send communications, and modify system state — often without a human reviewing each action.
This gap is why enterprises are stuck. The productivity case for AI agents is clear. The deployment path is not, because security teams cannot approve production access for entities that lack established identity, access controls, and behavioral monitoring. Cisco's RSA 2026 announcement is the most comprehensive response to this problem yet from an established enterprise security vendor.
of large enterprises are testing AI agents
But only 5% have successfully deployed them at production scale — primarily due to security, identity, and access control concerns.
Source: Cisco RSA 2026 keynote data
Three-Pillar Security Architecture for Agentic AI
Cisco organized its RSA 2026 agentic security announcements around three distinct problem areas. Each pillar addresses a different phase of the agent lifecycle — identity and access, pre-deployment testing, and production runtime.
Zero Trust Access for AI Agent Identities
- •Register each AI agent with a unique cryptographic identity in Cisco Identity Intelligence
- •Map every agent to an accountable human owner who is responsible for its behavior
- •Enforce granular, time-bound permissions via Duo IAM (minimum required access, auto-expiry)
- •Route all agent traffic through MCP gateway for intent-aware monitoring
- •Agent discovery and tool visibility dashboard for full auditability
AI Defense: Pre-Deployment Hardening
- •AI Defense Explorer: self-service dynamic red teaming and adversarial testing
- •Multi-turn attack simulation: prompt injection, jailbreaks, data exfiltration attempts
- •Agent Runtime SDK: embed security policies and guardrails directly at build time
- •Supports AWS Bedrock AgentCore, Google Vertex Agent Builder, LangChain
- •LLM Security Leaderboard: transparent risk ratings per model for procurement decisions
Open-Source Framework and SOC Automation
- •DefenseClaw: open-source agent inventory and runtime security, integrates with NVIDIA OpenShell
- •Splunk Agentic SOC: specialized triage, response, and threat analysis agents
- •Detection Studio: GA — automated detection rule generation from threat intelligence
- •Malware Threat Reversing Agent: GA — automated malware analysis at machine speed
- •Triage and Automation Builder Agents: targeting June 2026 GA
The MCP Gateway: Why This Architecture Matters
Model Context Protocol has become the standard plumbing for AI agents. Every major agent framework — LangChain, CrewAI, AutoGen, AWS Bedrock AgentCore, Google Vertex — uses MCP to connect models to tools, databases, and APIs. In 2026, MCP hit 97 million installs, cementing its status as the de facto standard.
The problem is that most MCP implementations allow direct agent-to-tool connections with no inspection layer. An agent connects to a tool, invokes it, and gets results — and the security team sees nothing unless they build custom logging. Cisco Secure Access's MCP gateway inserts a controlled inspection point between every agent and every tool.
What makes this "intent-aware" rather than just "traffic inspection" is that the gateway understands what the agent is trying to do — read a file vs. modify it, query a database vs. export it, send a message to one person vs. broadcast to all contacts. Intent-aware policies can block agent actions that match the shape of a data exfiltration or privilege escalation, even if the individual API calls look technically valid.
For security architects, this is the missing piece that makes production AI agent deployment viable. The access controls that protect human employees from accidentally or maliciously exfiltrating data can now extend to the non-human agents they deploy.
Prompt Injection: The Threat That Makes AI Defense Explorer Critical
Prompt injection is the primary attack vector against AI agents. An attacker embeds malicious instructions in content the agent will process — a document, an email, a web page — that hijack the agent's behavior. Because agents act autonomously, a successfully injected instruction can cause the agent to exfiltrate data, send phishing messages, modify files, or take other destructive actions without the human operator realizing it happened.
AI Defense Explorer addresses this with multi-turn adversarial testing: the platform simulates not just single-turn attacks but extended attack sequences where an adversary attempts to gradually shift agent behavior across multiple interactions. This is closer to how real prompt injection attacks work in practice.
The Agent Runtime SDK then allows developers to embed the policies validated during testing directly into the agent's runtime behavior — guardrails that travel with the agent rather than relying solely on perimeter controls. It supports AWS Bedrock AgentCore, Google Vertex Agent Builder, and LangChain at launch.
Product Availability Timeline
| Product | Status | Target Date |
|---|---|---|
| Detection Studio | Generally Available | RSA 2026 |
| Malware Threat Reversing Agent | Generally Available | RSA 2026 |
| Exposure Analytics | Launching | April–May 2026 |
| SOP Agent (Splunk) | Launching | April–May 2026 |
| Federated Search | Launching | April–May 2026 |
| Triage Agent (Splunk) | Target GA | June 2026 |
| Automation Builder Agent | Target GA | June 2026 |
| Agent Runtime SDK | Available | RSA 2026 |
| AI Defense Explorer | Available | RSA 2026 |
| DefenseClaw (open source) | Available | RSA 2026 |
Who Needs This Now
Enterprise security architects
If your organization is evaluating AI agent deployment and the security team has blocked approval, Cisco's Zero Trust agent identity framework provides the access control and audit trail architecture needed to satisfy enterprise security requirements.
Platform engineering and DevOps teams
AI Defense Explorer and the Agent Runtime SDK shift security testing earlier in the development lifecycle. Teams building agents on LangChain, AWS Bedrock, or Google Vertex can now validate adversarial resilience before code reviews — not after incidents.
CISO offices at Fortune 500 companies
The MCP gateway + intent-aware monitoring combination provides the visibility layer needed for compliance reporting. If your organization is subject to SOC 2, HIPAA, or FedRAMP, the ability to log every agent action with structured audit records is a hard requirement for production deployment.
AI-native startups building agentic products for enterprise
If you are selling AI agents into enterprise customers, integrating with Cisco's security framework (especially the Agent Runtime SDK) signals security posture to procurement teams. Enterprise buyers increasingly ask whether your agents are compatible with their Zero Trust architecture.
Research AI Agent Security With HappyCapy
Use HappyCapy to research the latest enterprise AI security frameworks, compare vendor approaches, and draft security requirements for your AI agent deployment.
Try HappyCapy FreeFrequently Asked Questions
What is Zero Trust for AI agents?
Why do only 5% of enterprises successfully deploy AI agents at scale?
What is the MCP gateway that Cisco announced?
What is Cisco AI Defense Explorer?
What is DefenseClaw?
Security Is Now the Enabler, Not the Blocker
The narrative around enterprise AI adoption has shifted. A year ago, the question was whether AI agents were capable enough for production use. Today, the capability question is largely answered — agents can do the work. The remaining question is whether the security infrastructure exists to deploy them safely.
Cisco's RSA 2026 announcements represent the most complete response to that question from the enterprise security ecosystem. Zero Trust for non-human identities, intent-aware MCP gateway enforcement, pre-deployment adversarial testing, and open-source runtime security together form an architecture that can actually satisfy a security review board.
The 5% production deployment rate is going up. The question is how fast, and which organizations build the security infrastructure first.