In this brave new world, breathless proclamations about agentic AI are everywhere.
What's notably missing is practical guidance on moving from abstract promises to tangible business outcomes.
Here’s the difficult truth, the one we started hearing about in the long-ago age of “big data” (which still presents plenty of old-school challenges in new-world settings): The average enterprise is drowning, this time in even more data.
There's an irony here, of course. The more valuable the insights locked in those endless streams of emails, Slack threads, CRM entries, PDFs, and video archives, the harder they become to actually use. Yet we keep adding to the pile and waiting for a magical AI tool to come along and sort it all out.
Because humans, clever as we might be, simply cannot parse it fast enough. And so, crucial signals slip past, overlooked and unused, like distant transmissions in a static storm.
But what if we stopped expecting people to tame this torrent alone and actually provided something that feels pretty darn close to magic?
This is agentic AI we’re talking about here. These are machines built not merely to ingest but to autonomously parse, reason, reflect, and take action, with the goal of distilling chaos into actionable insight.
All of this is happening already, according to Andy Pernsteiner, Field CTO at VAST Data, and Adel El Hallak, Senior Director of Product at NVIDIA AI.
The duo recently illuminated the path to this agentic enlightenment with a refreshingly concrete walkthrough of how it works in practice, real-world use cases front of mind.
A Walkthrough of a Workflow
Given the agentic AI focus, the session kicked off with NVIDIA Blueprints, which Adel El Hallak describes as collections of microservices designed to deliver specific use-case solutions, removing the guesswork of DIY approaches.
“Deploying AI is never just about a single model; it requires a coherent pipeline of models and tasks,” he says.
These blueprints are continually updated, a necessity given the constantly evolving nature of enterprise data. Practically, that means repeatedly extracting data (PDFs, videos, Slack messages, and the like) and converting it into vector embeddings. Embeddings give the AI a semantic representation of that content, so an agent can search and reason over enterprise data by meaning rather than by keyword.
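To make that concrete, here's a minimal sketch of the extract-and-embed step in plain Python. It isn't the Blueprint itself: the embedding model, chunking, and in-memory index below are stand-ins chosen purely for illustration.

```python
# Minimal sketch: chunk extracted text, embed it, and answer a query by cosine
# similarity. The model name, chunking, and in-memory "index" are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model

def chunk(text: str, size: int = 500) -> list[str]:
    """Naive fixed-size chunking; real pipelines split on structure, not characters."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# Pretend these strings were extracted from PDFs, Slack exports, CRM notes, etc.
documents = ["...text pulled from a PDF...", "...text pulled from a Slack thread..."]
chunks = [c for doc in documents for c in chunk(doc)]

# Embed once and keep the vectors alongside the source text.
vectors = model.encode(chunks, normalize_embeddings=True)

def search(query: str, top_k: int = 3) -> list[str]:
    """Return the chunks whose embeddings sit closest to the query's embedding."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ q  # dot product of normalized vectors = cosine similarity
    return [chunks[i] for i in np.argsort(scores)[::-1][:top_k]]

print(search("What did the customer say about renewal timing?"))
```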
Pernsteiner reminds us that to handle the volume and variety of AI data, enterprises need seriously robust infrastructure.
VAST Data's operating system for AI, as Andy describes it, is simply a “unified platform storing structured and unstructured data, enabling direct execution of AI computation.” He adds that this unification reduces complexity and latency, making it easy to deploy NVIDIA Blueprints via the integrated Data Engine.

A Real-World Scenario for Context
To put all of this in context, Pernsteiner highlighted a sales enablement workflow as an ideal use case because it's relatable and universally challenging.
"Sales executives often struggle initially due to scattered data sources," he says, and points to an agent they developed that consolidates data from internal systems (CRM, Slack, email) and external sources (web searches), transforming those disparate bits into actionable insight.
Initial AI responses are rarely flawless. Adel emphasizes that AI reasoning models are inherently iterative: they reflect, refine, and improve their output continuously. A re-ranking microservice is key here, scoring candidate results against one another to improve accuracy, which is crucial for building user trust in AI.
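For a rough feel of what a re-ranking pass does, here's a local sketch using an open cross-encoder. It's a stand-in for a re-ranking microservice, not NVIDIA's actual service; the model and passages are placeholders.

```python
# Minimal sketch of a re-ranking pass: score each candidate passage against the
# query with a cross-encoder and keep only the best. A local stand-in for a
# re-ranking microservice; the model and passages below are placeholders.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # illustrative model

def rerank(query: str, passages: list[str], keep: int = 3) -> list[str]:
    """Re-score (query, passage) pairs and return the highest-scoring passages."""
    scores = reranker.predict([(query, p) for p in passages])
    ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
    return [p for _, p in ranked[:keep]]

candidates = [
    "Acme's CFO asked to revisit renewal pricing in Q3.",
    "Lunch menu for the annual sales kickoff.",
    "Slack note: Acme renewal pushed to October pending budget review.",
]
print(rerank("When is the Acme renewal likely to close?", candidates, keep=2))
```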

El Hallak also highlights practical strategies for keeping compute costs in check. Reasoning models excel at planning, but they're slow and expensive for straightforward reporting; lighter-weight models such as Llama 3.1 70B handle those tasks efficiently, cutting compute costs and response times, which matters for real-world adoption.
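Here's a minimal sketch of that routing idea, assuming an OpenAI-compatible endpoint and illustrative model identifiers; none of these specifics come from the session itself.

```python
# Minimal sketch of task-based model routing: reserve the heavier reasoning model
# for planning and use a lighter instruct model for routine generation. The
# endpoint URL and model identifiers are assumptions, not confirmed specifics.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # OpenAI-compatible endpoint (assumed)
    api_key="YOUR_API_KEY",
)

MODELS = {
    "plan": "deepseek-ai/deepseek-r1",         # slower, pricier reasoning model (assumed id)
    "report": "meta/llama-3.1-70b-instruct",   # faster model for straightforward output (assumed id)
}

def run(task_type: str, prompt: str) -> str:
    """Route the prompt to the cheapest model that can handle the task."""
    resp = client.chat.completions.create(
        model=MODELS.get(task_type, MODELS["report"]),
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

plan = run("plan", "Outline the research steps for a briefing on Acme Corp.")
report = run("report", f"Write the account briefing following this plan:\n{plan}")
```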
Another point worth touching on: bridging multiple AI agents isn't trivial, but the trick is using something like the Model Context Protocol (MCP), a clever interop standard introduced by Anthropic.
MCP neatly standardizes the chaos of agent-to-tool and agent-to-data communications, letting agents query, fetch, and stitch together data from varied sources without the usual friction of custom integrations.
Practically, this means VAST and NVIDIA can quickly plug specialized services (think video summarizers or web-search bots) straight into existing workflows without breaking stride.
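For a flavor of what that looks like, here's roughly the shape of an MCP tool call on the wire. The tool name and arguments are hypothetical; a real client first discovers what a server offers via its tool listing.

```python
# Minimal sketch of what an MCP tool call looks like on the wire: a JSON-RPC 2.0
# request asking a server to run one of its advertised tools. The tool name and
# arguments are hypothetical; a real client discovers them first via "tools/list".
import json

tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_crm",  # hypothetical tool exposed by a CRM-facing MCP server
        "arguments": {"account": "Acme Corp", "fields": ["stage", "last_contact"]},
    },
}

# An MCP client sends this over the session transport (stdio or HTTP) and gets
# back a JSON-RPC response carrying the tool's result content.
print(json.dumps(tool_call, indent=2))
```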
It bears repeating: reasoning models are powerhouses, but that strength comes at a cost.
For generating final outputs like reports or summaries, the pipeline swaps the heavier reasoning models for lighter-weight, efficient LLMs, which dramatically reduces computational overhead and latency. Same accurate results, less GPU muscle required.
The pair add that workflows and reasoning alone aren't enough: enterprise-grade AI must prioritize security. Embeddings inherit the permissions of the source data, so the AI respects existing governance and access controls rather than bolting security on after the fact.
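One way to picture that inheritance, sketched with illustrative data structures rather than VAST's actual implementation: carry the source ACL with every chunk and filter retrieval results against the user's groups before the model ever sees them.

```python
# Minimal sketch of permission-aware retrieval: every chunk carries the ACL of
# its source document, and results are filtered against the requesting user's
# groups before anything reaches the model. The data structures are illustrative.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    score: float
    allowed_groups: set[str]  # inherited from the source file or record

def authorized(results: list[Chunk], user_groups: set[str]) -> list[Chunk]:
    """Drop any chunk the user could not have read in the source system."""
    return [c for c in results if c.allowed_groups & user_groups]

hits = [
    Chunk("Q3 renewal notes for Acme ...", 0.92, {"sales", "leadership"}),
    Chunk("HR compensation memo ...", 0.88, {"hr"}),
]
print(authorized(hits, user_groups={"sales"}))  # only the sales-visible chunk survives
```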
Rather than one "uber-agent," El Hallak envisions multiple specialized AI "digital workers," each optimized for specific tasks. This collaborative approach mirrors real-world teamwork, translating AI capabilities into practical, efficient workflows.
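In code, the "digital workers" idea boils down to a coordinator fanning sub-tasks out to specialists and assembling the results. The agents below are hypothetical placeholders, not the actual workflow the speakers demonstrated.

```python
# Minimal sketch of the "digital workers" idea: a coordinator fans sub-tasks out
# to specialized agents and stitches their answers together. The agents here are
# hypothetical placeholders standing in for real CRM, web, and video services.
def crm_agent(account: str) -> str:
    return f"[CRM summary for {account}]"

def web_research_agent(account: str) -> str:
    return f"[Recent news and web findings for {account}]"

def video_summary_agent(account: str) -> str:
    return f"[Highlights from recorded calls with {account}]"

WORKERS = (crm_agent, web_research_agent, video_summary_agent)

def brief_account(account: str) -> str:
    """Collect each specialist's contribution and assemble a single briefing."""
    return "\n".join(worker(account) for worker in WORKERS)

print(brief_account("Acme Corp"))
```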
Ultimately, the pair argue that agentic AI isn't about abstract capabilities; it's about tangible improvements: getting more done faster, more securely, and more efficiently.
Andy and Adel’s insights demonstrate not just what’s possible, but exactly how enterprises can translate promise into real-world impact.