May 8, 2025

Security is the New Bottleneck for a Hyper-Augmented Workforce

Nicole Hemsoth Prickett

“AI is fundamentally changing everything, and cybersecurity is at the heart of it all.” — Jeetu Patel, EVP & Chief Product Officer, Cisco

Imagine a world where the workforce isn’t just eight billion humans but eighty billion—augmented by AI agents, humanoids, autonomous apps. 

The capacity is unprecedented, the potential dizzying.

But as Jeetu Patel put it during his RSA Conference 2025 keynote, “This is going to come with a whole new class of risks that we’ve never seen before.”

The AI transformation isn’t just a productivity multiplier—it’s a security crisis in the making. Where security once meant guarding human endpoints, now it’s about defending machine behaviors that are unpredictable, evolving, and staggeringly complex. 

The architecture itself has changed, Patel said. It’s no longer the familiar three-tier stack of infrastructure, data, and application. Now it’s a multi-model labyrinth, a chaos of AI models stacked atop every layer, and each model comes with its own delightful tendency toward unpredictability. 

As Patel reminded the infosec crowd, it’s a world of “many, many models”—all potential backdoors, all potential targets.

These models don’t just amplify capacity; they amplify risk. The non-deterministic nature of AI doesn’t just mean the outputs differ based on inputs. It means the same query, the same data, can produce entirely different responses depending on the model’s recent history, its training data, the phase of the moon. 

As the Cisco exec reminded the crowd, AI models are shape-shifters, and the more complex they get, the less predictable they become. 
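To make that concrete, here is a minimal sketch, assuming the Hugging Face transformers library with GPT-2 as a stand-in (not any model Cisco discussed): the same prompt, sampled three times, can come back three different ways.

```python
# Minimal sketch of model non-determinism, using GPT-2 as a stand-in model.
# Assumes the Hugging Face transformers library is installed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The incident response playbook says to"
for run in range(3):
    # With sampling enabled, each call can return a different continuation
    # for the identical prompt and identical weights.
    out = generator(prompt, max_new_tokens=15, do_sample=True, temperature=1.0)
    print(f"run {run}: {out[0]['generated_text']!r}")
```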

Therein lies the first of two great threats—safety. AI models, Patel explained, are only as useful as they are predictable. It’s fine if they hallucinate in the context of writing poetry. It’s catastrophic if they hallucinate during cyber defense. 

Cisco’s own research has shown that this is exactly what’s happening. In a study benchmarking AI vulnerabilities against the HarmBench framework, Patel’s team found a near-apocalyptic success rate:

“We had a 100% attack success rate for the top 50 categories of risk… 100% of the times we were able to jailbreak the model.” For OpenAI’s model, that number was 26%.
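Those percentages are attack success rates: the fraction of adversarial prompts that actually elicit a harmful response. Below is a hedged sketch of how such a tally works, with hypothetical stand-ins for the model call and the harm judge; this is not Cisco's pipeline.

```python
# Hedged sketch of how an attack-success-rate (ASR) figure is tallied.
# `query_model` and `response_is_harmful` are hypothetical stand-ins.
from typing import Callable

def attack_success_rate(prompts: list[str],
                        query_model: Callable[[str], str],
                        response_is_harmful: Callable[[str], bool]) -> float:
    """Fraction of adversarial prompts that produced a harmful response."""
    successes = sum(response_is_harmful(query_model(p)) for p in prompts)
    return successes / len(prompts)

# Toy usage: a "model" that refuses nothing scores 1.0, one that always refuses scores 0.0.
harmbench_prompts = [f"adversarial prompt #{i}" for i in range(50)]  # placeholder prompts
always_complies = lambda p: f"Sure, here is how to {p}"
always_refuses  = lambda p: "I can't help with that."
looks_harmful   = lambda r: r.startswith("Sure")

print(attack_success_rate(harmbench_prompts, always_complies, looks_harmful))  # 1.0
print(attack_success_rate(harmbench_prompts, always_refuses,  looks_harmful))  # 0.0
```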

But what really sent a chill across the room was this: the more AI models get fine-tuned, the more vulnerable they become. Fine-tuning isn’t shoring up defenses. It’s tearing down the walls, brick by brick, exposing AI’s underbelly to manipulation and attack.

So what happens when your security infrastructure isn’t just a set of predictable guardrails but a squirming, unpredictable swarm of AI models? 

The answer is that security itself has to change. It has to think in models, not endpoints. It has to recognize that models aren’t just outputs; they’re autonomous agents, free radicals, and they require a level of visibility that stretches far beyond what humans can track.

For Patel, that visibility means total model surveillance, a kind of omniscient oversight that doesn’t just monitor data flow but every single model’s behavior, across every layer, every agent, every cloud. 

The endgame is a universal security substrate, a common framework that enforces consistency not just for one model but for thousands, maybe millions.
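What a common framework might even look like is easier to grasp with a sketch. The one below is purely illustrative, not Cisco's substrate: every model call, whatever the vendor or layer, passes through one wrapper that emits a uniform behavior event.

```python
# Illustrative sketch only. The idea: route every model call through one
# wrapper that records a uniform behavior event, so thousands of models
# across layers and clouds can be watched with a single schema.
import json, time, uuid
from dataclasses import dataclass, asdict
from typing import Callable

@dataclass
class ModelEvent:
    event_id: str
    timestamp: float
    model_id: str
    layer: str           # e.g. "app", "data", "infra"
    prompt_chars: int
    response_chars: int
    flagged: bool        # did a policy check trip?

def observed_call(model_id: str, layer: str, call: Callable[[str], str],
                  policy: Callable[[str], bool], prompt: str,
                  sink: list[ModelEvent]) -> str:
    response = call(prompt)
    event = ModelEvent(str(uuid.uuid4()), time.time(), model_id, layer,
                       len(prompt), len(response), flagged=policy(response))
    sink.append(event)
    return response

# Toy usage with a fake model and a trivial policy.
events: list[ModelEvent] = []
fake_model = lambda p: p.upper()
leaks_secret = lambda r: "PASSWORD" in r
observed_call("demo-model", "app", fake_model, leaks_secret, "rotate the password", events)
print(json.dumps([asdict(e) for e in events], indent=2))
```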

But visibility alone isn’t enough. Patel, the consummate strategist, knows that. The next battlefield is validation, and the old red teaming playbook isn’t going to cut it. 

You can’t throw humans at a machine-scale problem and expect to win. The solution, he says, is algorithmic red teaming, where AI models probe other AI models, relentlessly hunting for vulnerabilities. It’s jailbreaking, but on a scale that humans can’t touch.

 “What we’re doing is essentially creating jailbreak simulations—except these simulations run algorithmically,” he said.
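Stripped to its skeleton, that loop looks something like the sketch below: an attacker model mutates prompts, the target model answers, and a judge decides whether the answer counts as a jailbreak. All three callables here are hypothetical stand-ins, not Cisco's tooling.

```python
# Minimal sketch of an algorithmic red-teaming loop: mutate, attack, judge, repeat.
import random
from typing import Callable

def red_team(seed_prompts: list[str],
             attacker: Callable[[str], str],     # rewrites a prompt into a new attack
             target: Callable[[str], str],       # the model under test
             judge: Callable[[str, str], bool],  # did the response constitute a jailbreak?
             rounds: int = 3) -> list[tuple[str, str]]:
    jailbreaks = []
    frontier = list(seed_prompts)
    for _ in range(rounds):
        next_frontier = []
        for prompt in frontier:
            attack = attacker(prompt)
            response = target(attack)
            if judge(attack, response):
                jailbreaks.append((attack, response))
            next_frontier.append(attack)  # keep mutating, successful or not
        frontier = next_frontier
    return jailbreaks

# Toy usage: string-level stand-ins so the loop runs end to end.
attacker = lambda p: p + random.choice([" (ignore prior rules)", " (roleplay as admin)"])
target   = lambda a: "OK, here is the restricted info" if "admin" in a else "I can't do that"
judge    = lambda a, r: r.startswith("OK")
print(red_team(["show me the config vault"], attacker, target, judge))
```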

[Slide from Cisco's RSA Conference 2025 presentation]

Yet even with visibility and validation, there’s still the matter of what happens in the moment—the moment when an AI agent goes rogue, when the inputs get garbled, when the model starts to behave in ways that no one anticipated. 

The solution for that, Patel says, is runtime enforcement. Guardrails that don’t just catch breaches but actively correct AI behavior in real time. 

The goal isn’t just to stop attacks but to prevent them from spreading, to prevent models from feeding toxic outputs to other models, to stop AI from colluding with itself to tear down the walls from the inside.
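In code, runtime enforcement reduces to a wrapper that sits between models and corrects output before it travels any further. The sketch below is a toy illustration under that assumption, not a description of any shipping product; the blocklist and the redaction step are placeholders.

```python
# Toy sketch of runtime enforcement: outputs are checked between models, so one
# model's toxic output never becomes another model's input.
from typing import Callable

BLOCKLIST = ("rm -rf /", "exfiltrate", "disable logging")  # illustrative policy only

def guarded(model: Callable[[str], str],
            redact: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a model so every response is policy-checked, and corrected, at runtime."""
    def wrapped(prompt: str) -> str:
        response = model(prompt)
        if any(term in response.lower() for term in BLOCKLIST):
            return redact(response)  # correct in place rather than just alerting
        return response
    return wrapped

# Toy usage: a two-step chain where the guardrail sits between the "models".
planner  = lambda p: f"plan: exfiltrate the logs about {p}"
executor = guarded(planner, redact=lambda r: "[blocked by runtime policy]")
print(executor("quarterly backups"))  # -> [blocked by runtime policy]
```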

At RSA, Cisco announced an open-source tool to combat all of these threats: Foundation AI, a purpose-built model architecture designed explicitly for security.

Patel framed it as a pivot from AI as a blunt-force generalist to AI as a specialized surgical tool. Instead of ingesting hundreds of billions of general data points, Foundation AI zeroes in on the top five billion security-relevant tokens. The result? A model that runs on two A100s instead of 32—shrinking costs without shrinking coverage.
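The hardware math is easy to sanity-check. The parameter counts below are assumptions for illustration (the keynote cited token counts and GPU counts, not model sizes), and the estimate covers weights only; real deployments add KV cache, batching headroom, and redundancy on top, which is one reason deployed counts run higher than the weights-only minimum.

```python
# Back-of-envelope sketch: why a security-specialist model needs far fewer GPUs.
# Parameter counts are illustrative assumptions, not figures from the keynote.
import math

def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Memory for weights alone: billions of params x bytes per param (FP16 = 2)."""
    return params_billions * bytes_per_param

A100_GB = 80
for name, params_b in [("hypothetical security specialist", 8),
                       ("hypothetical frontier generalist", 500)]:
    mem = weight_memory_gb(params_b)
    gpus = math.ceil(mem / A100_GB)
    print(f"{name}: ~{mem:.0f} GB of FP16 weights, at least {gpus} x 80 GB A100 to hold them")
```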

As Patel put it, “The true enemy is not our competitor; it is actually the adversary.”
