At the Milken Institute’s 2025 Global Conference, some of finance’s best technical minds gathered to talk AI challenges, from overfitting and black-box explainability to nonlinear model fragility and the urgent need for better techniques.
Umesh Subramanian (Citadel), Katrina Paglia (Pantera Capital), Daniel Morillo (Freestone Grove Partners), Michael Zeltkevic (Oliver Wyman) and Andy Kreuz (WorldQuant) joined WSJ’s Gregory Zuckerman to cut past AI’s hype and dig into its harder limits.
The panel, “Algorithms to Outcomes: AI’s Impact in Finance,” circled around the idea that despite its predictive promises, AI in finance remains stubbornly backward-looking, bound to historical data and ill-equipped for genuine market uncertainty.
Asset managers, hedge funds, quants…everyone’s placing massive bets on their models’ ability to see around corners. But the catch is that AI’s superpower is pattern recognition from historical data, not forecasting unexpected pivots.
And as we’ve all seen this year in particular, markets shift abruptly: macro surprises and geopolitical jolts routinely crush historical assumptions. And when those models misread reality, billions vanish in an instant.

To harness AI fully, finance will have to confront and technically overcome the deep limitations of predictive modeling itself.
Umesh Subramanian from Citadel noted that quantitative methods rely heavily on stable historical inputs and break down as soon as those inputs distort or disappear.
Daniel Morillo from Freestone Grove sharpened this, adding that humans draw on context and judgment to detect regime changes that models often miss. The issue is AI’s fundamental difficulty integrating unstructured signals that mark inflection points before they surface in data.
Andy Kreuz of WorldQuant highlighted what he called AI’s most insidious trap: overfitting, where models become hyper-attuned to noise masquerading as signal.
Financial data, he argues, is densely packed but deceptively sparse on actual predictiveness, offering endless opportunities for false pattern-matching. Quants mitigate this through cross-validation, regime-aware modeling, and aggressive stress-testing, but, Kreuz cautioned, the challenge persists because markets fluctuate in ways historical patterns don’t match.
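Kreuz’s false-pattern trap is easy to demonstrate. The sketch below (entirely synthetic data, not any firm’s research workflow) mines a few hundred pure-noise “signals” for the one most correlated with in-sample returns, then re-checks that winner on a holdout period — the same train/holdout split that underlies cross-validation.

```python
import random

# Synthetic illustration of overfitting: search many pure-noise
# "signals" for the one that best "predicts" in-sample returns,
# then evaluate that same signal out of sample.
random.seed(0)

n_train, n_test, n_signals = 250, 250, 200
returns = [random.gauss(0.0, 1.0) for _ in range(n_train + n_test)]
signals = [[random.gauss(0.0, 1.0) for _ in range(n_train + n_test)]
           for _ in range(n_signals)]

def corr(xs, ys):
    # Plain Pearson correlation, stdlib only.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# In-sample "discovery": pick the noise series with the highest |corr|.
best = max(range(n_signals),
           key=lambda i: abs(corr(signals[i][:n_train], returns[:n_train])))
in_sample = corr(signals[best][:n_train], returns[:n_train])
out_sample = corr(signals[best][n_train:], returns[n_train:])

print(f"best in-sample correlation: {in_sample:+.3f}")
print(f"same signal, out of sample: {out_sample:+.3f}")
```

Across seeds, the in-sample winner’s correlation is typically several times larger than its out-of-sample value — exactly the gap that holdout validation and stress-testing are meant to expose.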
Morillo says finance’s toughest predictive problems often demand insight into subtle market shifts and regime changes that nonlinear, deep-learning models routinely miss. These models, while good at recognizing complex patterns, falter under suddenly novel conditions. Kreuz agreed, adding that nonlinear complexity itself can paradoxically degrade future generalizability.
Pantera Capital’s Katrina Paglia describes how this opacity also complicates regulatory scrutiny, especially as regulators and LPs demand transparency around financial decisions. Umesh Subramanian agreed, noting that predictions aren’t enough; firms have to demonstrate why a model made a particular trade or risk decision.
Increasingly, firms respond by embedding explainability constraints directly into model architectures, deploying interpretability techniques (SHAP, LIME), and designing rigorous validation frameworks that clarify model decisions.
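SHAP and LIME are full libraries, but the attribution idea behind them can be sketched with plain permutation importance: perturb one input at a time and measure how much the model’s fit degrades. Everything below — the toy three-feature model, its weights, the data — is an illustrative assumption, not any firm’s actual pipeline.

```python
import random

# Minimal permutation-importance sketch (a simple stand-in for
# SHAP/LIME-style attribution). Hypothetical model: feature 0
# matters a lot, feature 1 a little, feature 2 not at all.
random.seed(1)

n = 500
X = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(n)]

def model(row):
    return 2.0 * row[0] + 0.5 * row[1]

y = [model(row) + random.gauss(0.0, 0.1) for row in X]

def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

baseline = mse([model(r) for r in X], y)

importance = []
for j in range(3):
    # Shuffle column j, keep the others intact, and remeasure error.
    shuffled = [row[j] for row in X]
    random.shuffle(shuffled)
    Xp = [row[:j] + [s] + row[j + 1:] for row, s in zip(X, shuffled)]
    importance.append(mse([model(r) for r in Xp], y) - baseline)

for j, imp in enumerate(importance):
    print(f"feature {j}: importance {imp:.3f}")
```

The unused feature’s importance comes out at zero, while the dominant feature’s error jump dwarfs the minor one’s — the kind of decision-level evidence regulators and LPs are asking for.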
Faced with AI’s clear technical limits, leading firms are shifting focus not toward pure automation but toward strategic human integration.
Citadel’s Subramanian described how discretionary investors use AI to quickly gather, filter, and summarize information, leaving final, nuanced interpretation strictly human.
Morillo added that firms have to intentionally build teams that use AI without sacrificing human insight. It’s about reshaping the analyst role, he says, to emphasize strategic judgment, contextual awareness, and the very human skill of sensing subtle market shifts that algorithms have a hard time detecting.
While pure prediction is elusive, firms are blending probabilistic forecasting and scenario analysis, techniques designed to quantify uncertainty rather than dismiss it. Bayesian modeling and reinforcement learning also came clearly into focus during the panel.
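As one minimal illustration of quantifying uncertainty rather than dismissing it, here is a conjugate normal-normal Bayesian update of beliefs about a strategy’s mean daily return. The prior, the noise level, and the return series are all invented for the example.

```python
import math

# Conjugate normal-normal update: posterior over a strategy's mean
# daily return. All numbers are illustrative assumptions.
prior_mean, prior_var = 0.0, 0.05 ** 2   # skeptical prior: mean near 0
obs_var = 0.01 ** 2                      # assumed per-observation noise

observations = [0.012, -0.004, 0.009, 0.015, 0.003]  # synthetic returns

# Precision-weighted average of prior belief and observed data.
n = len(observations)
sample_mean = sum(observations) / n
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + n * sample_mean / obs_var)

lo = post_mean - 1.96 * math.sqrt(post_var)
hi = post_mean + 1.96 * math.sqrt(post_var)
print(f"posterior mean: {post_mean:.4f}, 95% interval: [{lo:.4f}, {hi:.4f}]")
```

The output is a distribution, not a point forecast: the posterior mean is shrunk slightly toward the skeptical prior, and the interval makes the remaining uncertainty explicit instead of hiding it.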
Zeltkevic says digital twins and agent-based simulations, once niche academic tools, are now viable for realistically modeling unprecedented market scenarios, offering a technically sophisticated pathway that goes a step beyond historical reliance.
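A tiny agent-based sketch conveys the flavor of what Zeltkevic described: heterogeneous agents — fundamentalists and trend-followers here, with made-up parameters — trade against each other, and the price path emerges from their interaction rather than from fitted history.

```python
import random

# Toy agent-based market (illustrative only): fundamentalists push
# price toward an assumed fair value, trend-followers chase the
# most recent move. All parameters are hypothetical.
random.seed(2)

fair_value = 100.0
prev_price, price = 104.0, 105.0
n_fund, n_trend = 60, 40

path = [price]
for _ in range(50):
    # Fundamentalists: net demand proportional to (fair value - price).
    fund_demand = n_fund * 0.005 * (fair_value - price)
    # Trend-followers: net demand proportional to the last move.
    trend_demand = n_trend * 0.01 * (price - prev_price)
    noise = random.gauss(0.0, 0.1)
    prev_price, price = price, price + fund_demand + trend_demand + noise
    path.append(price)

print(f"start {path[0]:.2f} -> end {path[-1]:.2f} (fair value {fair_value})")
```

With these parameters the price oscillates back toward fair value; crank up the trend-followers’ weight and the same market turns explosive — the kind of regime experiment that is impossible with purely historical data.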
As predictive models become ubiquitous, the ability to extract alpha from them diminishes. The competitive edge, therefore, won’t lie in who adopts the latest algorithms, but in who integrates technical sophistication with expert human judgment.
Finance, Subramanian insisted, will increasingly reward firms that fuse predictive rigor, deep contextual insight, and adaptive flexibility into one cohesive system.
None of the panelists argued that near-future finance will abandon predictive ambition, but they suggested it will reorient toward more robust and flexible AI frameworks.
Panelists also pointed toward greater reliance on previously inaccessible data (unstructured, alternative, even synthetic) to improve forward-looking accuracy. They also foresee deeper integration of behavioral finance principles into predictive models, blurring the lines between human intuition and AI analytics.
Yet, even in this vision, Subramanian cautioned that if predictive AI becomes ubiquitous and reliably accurate, it risks commoditizing alpha itself.
The broader takeaway from the panel is that, yes, AI delivers unprecedented efficiency, analysis, and near-term clarity. But predicting markets still means navigating profound uncertainty.