Thought Leadership
Nov 5, 2025

Shared Everything Podcast

What is Shared Everything?

Authored by

Nicole Hemsoth Prickett

Welcome to the Shared Everything podcast with new episodes throughout the week.

Below are our episodes to date. Be sure to subscribe using your favorite podcast provider or listen using the player.

Most Recent

From Detection to Reasoning: Scaling the Infrastructure of Autonomous AI

In this episode of the Shared Everything podcast, Nicole Hemsoth Prickett speaks with Norm Marks, VP of Automotive at NVIDIA, about how autonomous systems are evolving from detection to prediction and now reasoning-driven AI. Marks explains how that shift is driving massive increases in GPU scale, synthetic data generation, and simulation, forcing companies to rethink infrastructure as training pipelines expand across hybrid datacenter and cloud environments. The result is a new class of AI factories built to train and operate autonomy at industrial scale.

Episode 1

What Comes After Compute? An OS for AI

For the inaugural episode of Shared Everything, Nicole sits down with Renen Hallak, CEO and founder of VAST Data, to discuss what comes after compute. If we erased all technical debt and legacy architecture and started building AI infrastructure from scratch in 2025, what would it look like? Hallak isn’t talking about cloud-native apps or hypervisors—he’s talking about power plants, 500kW racks, and a habitat for autonomous agents that don’t just execute tasks but persist, learn, and communicate over time.

Episode 2

AI is Challenging the Old Efficiency Rules

In this episode, IT energy efficiency expert Jonathan Koomey joins to unpack AI’s growing power demands, the limits of infrastructure, and the future of computing efficiency. From the origins of Koomey’s Law to today’s data center crunch, we explore whether technology can keep doing more with less—or if AI is rewriting the rules.

Episode 3

The Vector Crisis Is Coming: Jeff Denworth on AI’s New Metadata Madness

In this episode, Jeff Denworth, co-founder at VAST Data, talks to Nicole about why the Hadoop-driven Big Data hype feels ancient and why metadata has quietly become AI’s new frontier. From how vectors and embeddings have made metadata complex and mission-critical, to infrastructure challenges at exabyte scale, we unpack the implications of retrieval-augmented generative AI (RAG). It’s a sharp, insightful conversation about building infrastructure for an increasingly agent-driven future—and who (or what) might soon control the enterprise workforce.

Episode 4

Pipelines, Power, and Parallel Worlds: Inside the Digital Twin Stack

Experts Wes Brewer (Oak Ridge National Lab) and Adrian Jackson (EPCC) discuss the evolving landscape of digital twins, particularly in the context of supercomputing and AI. They explore the distinctions between digital twins and traditional simulations, the real-world applications of digital twins, and the infrastructure challenges faced in their implementation. The discussion also delves into the potential of distributed digital twins and the role of AI in enhancing digital twin workflows, concluding with insights on future workloads and applications.

Episode 5

TACC's Dan Stanzione on AI, Power, and the Future of Supercomputing

Nicole, TACC Executive Director Dan Stanzione, and VAST Data's Don Schulte discuss the evolution of the Texas Advanced Computing Center and its role in high-performance computing (HPC). Dan highlights TACC's history, including the transition from Stampede to Stampede 2 and the impact of AI on power consumption and cost. They discuss the upcoming Horizon system, which will replace Frontera, featuring 4,000 NVIDIA GPUs, 900,000 CPU cores, and half an exabyte of solid-state storage. The conversation also touches on the importance of data management, the shift from batch-oriented workflows to real-time data assimilation, and the potential of emerging technologies like photonics and quantum computing.

Episode 6

Europe Gets Real About AI Sovereignty and Neoclouds at GTC Paris

In this episode from GTC Paris, Nicole chats with Andy Pernsteiner, Global Field CTO at VAST Data, for an insider's view of Europe's quickly evolving AI landscape. Andy shares candid insights on the shift from heavy-lifting AI infrastructure builds toward practical, profitable services built on those investments. Sovereign clouds take center stage, as Andy unpacks Europe's increasing emphasis on secure, traceable, and auditable data architectures designed around tight regulatory frameworks and national boundaries.

Episode 7

The New Shape of Life Sciences Systems

Nicole and Dr. Subramanian Kartik trace the tectonic shifts in computational infrastructure driven by the rise of life sciences as a data-intensive domain. From the advent of long-read nanopore sequencing to the seismic influence of AlphaFold and cryo-EM, Kartik outlines a field no longer tethered to traditional HPC assumptions. As genomic data explodes and microscopes spill out petabytes, he argues, the industry must abandon legacy parallel file systems in favor of architectures purpose-built for random I/O, GPU-rich workflows, and relentless uptime.

Episode 8

From Supercomputers to the Frontlines of AI Inference: Glenn Lockwood on Infrastructure That Lasts

In this episode of Shared Everything, Glenn Lockwood, just named Principal Technical Strategist at VAST Data, shares what decades at the bleeding edge of large-scale systems design have taught him about architecting for an AI future that refuses to stay put. From building the first all-NVMe 30PB Lustre file system to designing Azure’s training clusters, Glenn walks us through why performance alone is no longer enough, why inferencing shattered traditional supercomputing data patterns, and why today’s infrastructure decisions must be guided not by legacy conservatism, but by intrinsic flexibility. With characteristic clarity and conviction, Glenn lays out the case for treating adaptability as a first-class citizen in architecture...and why VAST is where he’s chosen to do just that.

Episode 9

The AI Dilemma: Why Federal IT Projects Fail and How to Fix Them

Stacks of federal reports tell countless stories of IT investments gone sideways, yet the stakes have never been higher as artificial intelligence reshapes government. David Hinchman, Director of IT and Cybersecurity at the Government Accountability Office (GAO), joins Shared Everything to dissect why federal technology initiatives often falter and how these invisible fault lines could dangerously widen in the age of AI. From planning pitfalls to hidden infrastructure challenges, Hinchman reveals the critical decisions that determine whether AI becomes government’s greatest tool...or its most costly failure.

Episode 10

Lambda's VP of Infrastructure on Building the Aggregated Edge for Sovereign AI

In this episode, Nicole talks with Ken Patchett, VP of Datacenter Infrastructure at Lambda, about how hyperscale AI and sovereign LLMs are redefining datacenter and data management strategies. Ken highlights the challenge of data gravity, emphasizing the critical role of co-locating extensive storage infrastructure alongside ultra-high-density compute to support increasingly data-hungry workloads. He outlines Lambda’s "aggregated edge" model, designed for regional deployment of inference and enterprise workloads, enabling localized data processing and compliance with global sovereignty and privacy regulations. The conversation also addresses how these changes demand adaptive multi-density infrastructure, integrating flexible compute-storage designs that accommodate shifting hardware requirements and evolving regulatory landscapes.

Episode 11

How SK Telecom Built a Sovereign AI Cloud from the GPU Up

In this episode, we explore how SK Telecom, South Korea’s largest wireless carrier and now a major force in AI infrastructure, joined forces with VAST Data to tackle one of the most difficult problems in large-scale computing: building a sovereign AI cloud that doesn’t compromise on speed, security, or scalability. Facing the nation’s mandate to keep AI models, data, and infrastructure fully under domestic control, SK Telecom had to rethink GPU virtualization from the ground up. The result is a platform that delivers near-bare-metal performance, strict multi-tenancy, and instant provisioning, setting a new standard for how sovereign AI infrastructure can be designed and operated.

Episode 12

How Engineers Navigate Data Transition in the Age of AI

In this episode, Nicole speaks with Aaron Chaisson and Blake Golliher of VAST Data about how the company is reframing its mission for the AI era, centering on the idea of becoming the Operating System for AI. Aaron lays out the strategy behind this shift, while Blake, drawing on his deep background in building large-scale data platforms, explains how the VAST SyncEngine enables customers to move and manage massive volumes of data across sites, clouds, and AI pipelines in real time. The discussion highlights why the ability to synchronize data at scale is critical for enterprise AI adoption, and how VAST’s approach marries technical architecture with business strategy to help organizations operationalize intelligence in ways traditional storage platforms never could.

Episode 13

Building the Secure AI Factory: Cisco, NVIDIA, and VAST Rewire the Enterprise Datacenter

On this episode, Nicole sits down with Danny McGinniss, VP of Product Management for Cisco Compute, Jacob Liberman, Director of Enterprise Product at NVIDIA, and John Mao, VP of Business Development and Alliances at VAST, to pull apart what it really means when three of the biggest forces in infrastructure line up behind the Cisco Secure AI Factory with NVIDIA, an architecture that brings together Cisco’s compute and networking, the NVIDIA AI Data Platform, and the VAST Insight Engine. The episode walks through reimagining the datacenter as an AI factory where security, storage, speed, and data gravity collide to make enterprise AI real.

Episode 14

The Future of Reasoning Models and AI Infrastructure

In this episode, Nicole brings reasoning models to center stage. No longer just text predictors, they now loop, branch, and drag in outside data, which blows open context windows and GPU limits. Alon Horev, CTO of VAST Data, unpacks how this shift strains infrastructure, while Kevin Deierling, SVP of Networking at NVIDIA, explains how NVIDIA Dynamo moves KV caches and workloads across GPUs, networks, and storage to keep agentic workflows moving. Data platforms become an extension of memory, enabling longer chains of thought, real-time agents, and secure, observable data paths. The result is a vivid picture of the AI datacenter as the nervous system for reasoning at scale.

Episode 15

From AlphaGo to Surfer H: The New Frontier of AI Agents for All

In this episode, Nicole talks to Laurent Sifre, co-founder and CTO of H Company and former DeepMind scientist behind AlphaGo, AlphaFold, and Chinchilla. They explore how his discoveries in model scaling shaped the design of H Company’s computer-use agents, including the Surfer H platform that learns to navigate software through perception and action instead of APIs. The conversation dives into data infrastructure, distributed KV caching, sovereign compute, and why the future of automation depends on smaller, specialized models that act in the real digital world.

Episode 16

Data Proximity and the Quantum Architecture of Tomorrow

In this episode, Nicole talks to Chris Powell, Chief Scientist at SAIC, and Dr. Subramanian Kartik, Chief Scientist at VAST Data, about how the foundations of supercomputing are being rewritten by quantum advances, new architectures, and the collapse of distance between data and compute. Together they explore what happens when data becomes the environment of computation itself, how proximity and randomness define the next frontier, and why the systems of the future will think exactly where the information lives.

Episode 17

From Cloud to Cosmos: Jason Vallery on Building the Next Generation of AI Infrastructure

On today’s episode of the Shared Everything podcast, Nicole talks to Jason Vallery, who just joined VAST after a 13-year career at Microsoft where he helped build the Azure cloud from the ground up. Jason reflects on the early days of object storage and cloud-native computing, when scaling from petabytes to exabytes redefined what infrastructure meant, and explains how lessons from Azure’s hyperscale era now shape his vision for VAST’s role in the AI age. He talks about the convergence of file and object systems, the evolution of AI storage built for thousands of GPUs, and the industry’s pivot from “data gravity” to a world where compute follows power and data must follow compute. Together, they trace how public cloud principles birthed the AI supercomputers of today and how the next wave of disaggregated, multi-cloud “neo clouds” will demand architectures that look a lot like what VAST is building.

Episode 18

When Software Becomes a System: The Architecture Behind VAST 5.4

In this episode, Nicole talks to Jeff Denworth, Co-Founder of VAST Data, about the deep architectural shifts behind the 5.4 release, delving into how a new distributed runtime, native vector database, and event-driven compute layer transform VAST from a storage platform into a fully programmable AI operating system. Jeff explains how real-time vector inserts, parallelism without inter-node communication, and disaggregated shared-everything design make it possible to reason over data as it arrives, powering applications from Smart City analytics to trillion-scale AI pipelines.

Learn what VAST can do for you

Sign up for our newsletter and learn more about VAST or request a demo and see for yourself.
