There was a time when cybersecurity teams were viewed as the corporate equivalent of air-traffic controllers in a snowstorm — necessary, but slowing everything down.
In the age of artificial intelligence, that logic has flipped.
“You have to have security to get AI and get the innovation,” Peter Bailey a senior cybersecurity executive nine months into his tenure at Cisco — said in a recent interview with David Bombal. “Security is now a business enabler.”
It is a striking shift in tone for an industry long accustomed to being told to get out of the way. But as companies race to deploy AI agents capable of writing code, analyzing financial data and autonomously calling other software tools, chief information security officers — CISOs — are losing sleep.
And in some cases, they are slamming the brakes.
The Sleepless CISO
For years, security leaders complained about “operational toil” — too many tools stitched together into fragile defensive frameworks. Today, their concern is more existential: employees unleashing AI systems that can access sensitive data, connect to internal applications and act with a degree of autonomy that traditional controls were never designed to manage.
What worries them is not merely experimentation. It is exposure.
Employees are building internal AI agents that inadvertently tap into personally identifiable information. Developers are downloading open-source models from public repositories. Some are spinning up model servers using the Model Context Protocol (MCP), a rapidly adopted standard introduced by Anthropic, without embedding authentication or access controls.
“The spec came out, and within two months, MCP servers were popping up everywhere,” Peter said. “Really cool and easy technology to build — but with nothing about security built into it.”
In a traditional enterprise network, access might be static: once granted, an employee can reach email or internal applications indefinitely. But grant that same persistent access to an AI agent — software that can read, write, call tools, scrape data and update itself — and the risk profile changes dramatically.
A static open door, he argues, no longer works.
“It has to be dynamic.”
The Expanding Attack Surface
Security professionals speak in metaphors. One of the most enduring is the “kill chain” — the sequence of steps an attacker follows from reconnaissance to data exfiltration.
Each step has a cost. And cost determines scale.
Historically, sophisticated intrusions — the kind associated with nation-state actors — required patience and capital. Attacks like the breach of SolarWinds or the ransomware strike on Colonial Pipeline involved months of reconnaissance and lateral movement inside networks.
That kind of operation is expensive.
AI threatens to collapse that cost curve.
Peter describes a near-future in which cybercriminals purchase “agent packages” the way they now buy ransomware-as-a-service kits — highly specialized AI systems capable of identifying vulnerabilities, crafting exploits and evading detection at machine speed.
“Sophistication, prevalence, speed,” he said. “If attackers out-innovate, it could get very bad very quickly.”
In this scenario, the asymmetry deepens. The top 1 percent of enterprises may have robust controls. The rest are “somewhere on a journey.” AI compresses the timeline of that journey.
Jailbreaks and Shadow AI
The vulnerabilities are not theoretical.
Studies in recent months suggest that a vast majority of large language models remain susceptible to prompt injection and jailbreaking techniques — methods that bypass guardrails to extract sensitive information or trigger unintended behavior.
At the same time, organizations face what security teams call “shadow IT,” now reborn as shadow AI: unauthorized agents, model servers and integrations running outside formal oversight.
To counter that, Cisco has expanded its internal red-teaming — using automated tools and adversarial testing frameworks to attack models and agents before malicious actors do. It is also cataloging internal MCP servers and scanning models for provenance, malicious code and supply chain risks — a concept borrowed from software bills of materials.
“You wouldn’t pull in a Docker container without scanning it,” Peter said. “Why would you do that with a model?”
Cisco’s AI Defense platform, introduced more than a year ago, now aims to validate models before deployment and apply guardrails that inspect prompts and outputs semantically — not just syntactically — looking for anomalous intent or policy violations.
The next frontier is identity.
Humans have identities inside enterprise systems. Machines do too. AI agents, Peter argues, must as well — authenticated, authorized and continuously monitored. Access should be leased, not permanent. Behavior should determine privileges, not assumptions.
The underlying philosophy echoes a long-standing security doctrine: zero trust.
Zero Trust, Reimagined
Zero trust has been discussed for years — often criticized as aspirational. In the AI era, Peter believes it becomes unavoidable.
“Total visibility. No blind spots. Least-privilege access,” he said. “All of that has to become true.”
The complication is encryption. Much enterprise traffic is now encrypted end to end, rendering traditional firewalls insufficient. AI workloads introduce new seams — between agents, APIs, model servers and external data sources.
Cisco, which owns substantial portions of the networking stack, is embedding AI-specific inspection capabilities into its SD-WAN and secure access products. The idea is to inspect intent and behavior closer to the network edge — before machine-speed activity cascades downstream to a security operations center.
Peter likens the approach to an immune system: distributed, adaptive and embedded, rather than bolted on.
In high-stakes environments — robotic manufacturing lines, AI-assisted surgery — milliseconds matter. Latency cannot be sacrificed to security, nor security to latency. Both must coexist.
The Innovation Paradox
The tension is palpable. On one side, executives see AI transforming productivity, automating software development and unlocking data long trapped in dashboards. On the other, CISOs see open doors.
The instinctive reaction has been “shields up” — restricting deployments, sandboxing agents and imposing friction.
But friction slows innovation. And companies that hesitate risk being outpaced.
“It can absolutely be done safely,” Peter said. “But you need your runbook. It starts with people and process. Technology is the last thing you do.”
In other words: governance before deployment.
A Career in the Crossfire
Asked whether young technologists should enter cybersecurity, Peter does not hesitate.
“It’s mission-driven,” he said. “It’s a team sport.”
The adversaries — particularly nation-states — possess asymmetric advantages. Defense requires collaboration, even among competitors.
For his own children, he offers broader advice: be curious, creative and fluent in AI tools. The future workforce, he believes, will not be replaced by AI — but divided between those who can harness it and those who cannot.
The irony is that the very tools promising exponential productivity also demand exponential vigilance.
The age of AI was supposed to be about speed. Instead, it may be about balance — between autonomy and oversight, innovation and restraint.
Security is no longer the afterthought.
It is the prerequisite.
Watch the entire interview with Peter Bailey
