Why the answer to AI’s implementation crisis isn’t less AI—it’s better foundations.
A response to Tom Davenport’s “Slowing Down the AI Train” (February 20, 2026), https://tdavenport.substack.com/p/slowing-down-the-ai-train
Tom Davenport has earned the right to change his mind. He has written at least seven books on AI, his career spans analytics from its earliest days, and he carries the credibility of 167,000 scholarly citations, so when he says he’s “ready to put on the brakes,” people listen. His latest Substack piece, “Slowing Down the AI Train,” is characteristically honest and well-reasoned.
But I think he’s diagnosing the wrong problem.
Davenport is right that AI’s current trajectory is producing real harm—cybersecurity threats, job displacement anxiety, eroding public trust. The headlines he cites are genuinely alarming. Where I part company is with his prescription: slow down the research, restrict what agentic systems can do, and hope that regulation catches up. That approach treats the symptoms while ignoring the underlying condition, which is that most organizations attempting AI don’t have the foundations to implement it responsibly.
The problem isn’t that AI is too fast. It’s that organizations are too unprepared.
The Failure Rate Tells a Different Story
Davenport has documented this extensively. Only about half of respondents in his global survey say they’re getting “a great deal of value” from AI. McKinsey’s 2025 State of AI report found that 88% of companies are using AI in at least one function, but only 39% report any enterprise-level EBIT impact. BCG’s data is even starker: 60% of organizations generate no material value from their AI investments, and only 5% create substantial value at scale.
These aren’t numbers that scream “AI is too powerful and moving too fast.” They scream “organizations are deploying AI on broken foundations and wondering why it doesn’t work.”
Research from Harvard Business School and MIT consistently shows that 80–95% of AI pilots fail to reach production. But when you study why they fail, the answers are remarkably consistent: poor data quality, lack of organizational readiness, no change management, undefined success metrics, and workflows that were never redesigned for AI. As analyst Bertrand Duperrin concluded in his synthesis of the McKinsey and BCG reports: “AI does not lack capabilities but it is organizations that lack the structure to absorb them.”
There’s a harder truth underneath those statistics: most organizations didn’t just stumble into AI unpreparedness. They underinvested in data governance, change management, and process modernization for years—sometimes decades—because those capabilities never had a burning platform to justify the spend. AI is that burning platform now. The failures we’re seeing aren’t AI failures. They’re the accumulated cost of deferred investment in fundamentals that finally has a price tag attached to it. Blaming the technology for exposing those gaps is like blaming the stress test for revealing the cracks in the foundation.
Slowing down AI research won’t fix any of those problems. It will just leave organizations equally unprepared for slightly less capable systems.
Davenport’s Own Advice Was Better
Here’s what’s ironic: Davenport’s earlier work contains the blueprint for exactly the approach that would address his current concerns. In “The AI Advantage,” he argued against moonshot thinking and for practical, incremental implementation. In “Working with AI,” he made the case that AI augments human capabilities rather than replacing them. In “All-In on AI,” he showed that the companies generating real value from AI are the ones that invest in organizational change alongside technology.
His prescription in those books was essentially: focus on implementation discipline, not the technology itself. Don’t chase the flashiest model; build the organizational muscle to deploy what you have. That advice was right then, and it’s even more right now.
The fact that Davenport now wants to pump the brakes on research suggests that the problem has gotten worse—not because the technology is more dangerous, but because the gap between AI capability and organizational readiness has widened. The answer isn’t to slow the technology down to match organizational maturity. It’s to bring organizational maturity up to match the technology that’s already available. Organizations that fail to close that gap face two risks: being outpaced by more nimble competitors, and deploying AI on shaky foundations, where the downside compounds by the day.
The Regulation Argument Undermines Itself
Davenport is admirably honest about the difficulty of slowing AI through regulation. Congress is dysfunctional. The current administration favors deregulation. State-by-state approaches create a “nutty” patchwork. AI vendors are spending tens of millions to fight regulation. Existing laws like New York City’s Local Law 144 on AI in hiring haven’t been enforced.
He acknowledges all of this and then concludes that slowing down is “desirable or even necessary” anyway. But desirable and achievable are different things. And when every mechanism you propose for slowing down has already demonstrably failed, the pragmatist in you should be looking for a different approach.
I’d argue the different approach is staring us in the face: instead of trying to regulate AI development (which, as Peter Jansen noted in the article’s comments, moves at the speed of the electron while policy moves at the speed of election cycles), we should focus on governing AI deployment. That’s something AI tech providers and enterprise consumers can control today without waiting for Congress, the UN, or international treaties.
What Actually Works: The Foundation-First Alternative
I spend my days working with organizations trying to get value from AI. The pattern is strikingly consistent. Companies that invest in data readiness, organizational alignment, and systematic deployment methodology before touching AI tools succeed at dramatically higher rates than those that don’t.
The organizations trapped in what the industry calls “pilot purgatory”—and there are many—almost always share the same characteristics: they bought AI tools before assessing whether their data could support them; they launched pilots without defining what success looked like; they skipped change management because it was slower than building a demo; and they had no systematic way to evaluate which use cases were worth pursuing in the first place.
None of those failures required more advanced AI. They required better preparation for the AI they already had. This is exactly what Davenport argued in his earlier work, and it’s what McKinsey, BCG, IBM, and every serious business leader are now confirming: the organizations seeing real value from AI are the ones that redesign workflows, invest in data infrastructure, define clear metrics, and build organizational capacity for change.
Stopping AI research doesn’t help a manufacturer whose production data is sitting in disconnected spreadsheets. It doesn’t help a financial services firm whose KYC process is undocumented tribal knowledge. It doesn’t help a PE firm trying to create value across a portfolio of companies at different maturity levels. What helps those organizations is a structured methodology for assessing readiness, identifying high-value use cases, and deploying AI with the organizational support to make it stick.
This is especially true for agentic AI—the specific technology Davenport fears most. An autonomous agent deployed on clean, governed data with well-defined workflows and clear guardrails is a productivity engine. The same agent deployed on fragmented data, undocumented processes, and no governance framework makes mistakes faster, at scale, with less human oversight to catch them. The foundation-first approach is both the more effective and the more responsible way to implement AI agents.
Let’s Be Honest About Agents and Jobs
Davenport raises valid concerns about agentic AI—systems that can take autonomous actions, set goals, and generate their own code. Demis Hassabis’s warnings about unintended actions deserve serious attention. But I want to address the workforce question directly, because I think the AI industry has been dishonest about it, and that dishonesty is part of what’s fueling the backlash Davenport describes.
AI agents will change the ratio of people to output. I just modeled a process for a client where agents handling a portion of the work would deliver significantly more throughput with fewer people. That’s not a hypothetical. That’s the math. And anyone who runs the same numbers will reach the same conclusion.
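To make that concrete, here is a minimal back-of-the-envelope sketch of the kind of arithmetic involved, written in Python. The headcounts, cycle times, and the assumption that work parallelizes cleanly are illustrative placeholders I chose for this post, not figures from the client engagement.

```python
import math

HOURS_PER_DAY = 8  # productive hours per analyst per day (illustrative)

def team_throughput(headcount: int, hours_per_item: float) -> float:
    """Items a team completes per day, assuming work parallelizes cleanly."""
    return headcount * HOURS_PER_DAY / hours_per_item

# Baseline: 10 analysts, each assessment takes roughly 3 working days (24 hours).
baseline = team_throughput(headcount=10, hours_per_item=24)

# With agents absorbing the data-intensive grind, the same assessment takes ~3 hours.
with_agents = team_throughput(headcount=10, hours_per_item=3)

# Scenario A: same team, far more output.
print(f"Throughput: {baseline:.1f} -> {with_agents:.1f} assessments/day "
      f"({with_agents / baseline:.0f}x)")

# Scenario B: the old volume, with fewer people.
needed = math.ceil(baseline * 3 / HOURS_PER_DAY)
print(f"Analysts needed to match the old volume: {needed} of 10")
```

A real model would layer in exception handling, review time, and ramp-up, which soften the headline numbers, but the basic shape of the result is what matters here.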
So let’s stop pretending this is only about “augmentation.” In some cases, agents mean the same team handles dramatically more volume—customer assessments that took three days now take three hours, compliance checks that required a dedicated analyst now run in the background, supply chain anomalies that went unnoticed for weeks now surface in real time. The team doesn’t shrink; it becomes vastly more capable. In other cases, the throughput increase means the organization needs fewer people to achieve the same or better results. Both of those scenarios are real, and they often coexist within the same deployment.
Here’s what’s also true: the humans who remain in either scenario are doing fundamentally different work. They’re making judgment calls, handling exceptions, managing relationships, and focusing on strategic decisions—the work that’s hardest to automate and most valuable to the business. The repetitive, data-intensive grind that burned people out and created errors gets absorbed by agents. As throughput accelerates, the share of human time devoted to communication and decision-making grows.
Acknowledging that AI changes workforce composition is not the same as arguing we should slow down AI research. It’s an argument for deploying AI thoughtfully—with workforce transition planning built in from day one. Companies that approach AI deployment with clear-eyed honesty about headcount implications can manage those transitions responsibly: through retraining, redeployment to higher-value work, attrition-based reduction, or reinvesting productivity gains into growth that creates new roles. Companies that deploy AI recklessly—or worse, companies that get blindsided because they spent years being told to wait—are the ones that create the painful layoff headlines and fuel the public backlash Davenport is reacting to.
The answer to workforce disruption isn’t to freeze the technology. It’s to make workforce planning a non-negotiable part of every AI implementation. That requires honesty, not optimism—and it requires organizations to start preparing now, not after some future regulation forces them to.
The Analytical AI Insight He Should Have Explored Further
Buried in Davenport’s piece is one of the most important observations he makes: companies report getting more value from analytical AI than generative AI “by a substantial margin.” This is exactly right, and it deserves far more attention than a single sentence.
Analytical AI—predictive maintenance, demand forecasting, risk scoring, customer segmentation—is where the most proven enterprise value sits. McKinsey puts AI’s largest economic impact pools in customer operations, marketing and sales, software engineering, and R&D, amounting to $2.6–$4.4 trillion in potential annual impact. And the value enterprises are actually banking today comes disproportionately from analytical applications, not from the generative demos grabbing headlines.
If Davenport is worried about AI moving too fast, the strategic response isn’t to slow everything down. It’s to redirect organizational energy toward the AI applications that are already generating proven returns and away from the speculative frontiers that generate impressive demos and terrible ROI. That’s not a technology problem—it’s a strategy and governance problem. And it’s solvable right now, with existing tools, at every company that’s willing to do the foundational work.
The Real Danger of Slowing Down
There’s a practical consequence to the “slow down” narrative that Davenport doesn’t address: it gives struggling organizations an excuse to stop trying. If the message from one of AI’s most respected voices is that the technology itself is the problem, then every COO who just watched a $2 million AI pilot fail has permission to blame the technology rather than examine whether their data was ready, their teams were prepared, or their use case selection was sound.
That’s the worst possible outcome for the companies that most need AI to compete. The mid-market manufacturers under PE pressure to show value. The healthcare organizations that could genuinely improve patient outcomes with AI-assisted diagnostics. The financial services firms losing market share to digitally native competitors. These organizations don’t need less AI—they need better-implemented AI.
Meanwhile, the companies that Davenport profiled in “All-In on AI”—the high performers who invested in organizational change alongside technology—are accelerating. BCG’s research shows a widening gap between AI leaders and laggards, a “winners-take-most” dynamic that punishes hesitation. Telling the laggards to slow down while the leaders accelerate isn’t a safety measure. It’s a competitive death sentence.
A Pragmatist’s Alternative
Davenport opens and closes his piece by calling himself a pragmatist. So let me offer a pragmatist’s alternative to slowing down the train:
First, accept that you cannot regulate AI development into moving slower. The geopolitical dynamics, the economic incentives, and the decentralized nature of AI research make this impossible, as Davenport himself concedes. Stop spending political and institutional energy on a goal that even its proponents admit is out of reach. No government in history has moved at the speed AI is moving now.
Second, focus organizational energy on what you can control: the quality of AI deployment. Invest in data readiness. Build change management into every AI initiative from day one. Use systematic frameworks to select the use cases with the highest likelihood of success, not the most impressive demos (a minimal scoring sketch follows the fifth point below). Measure outcomes relentlessly and kill projects that don’t deliver.
Third, treat governance as a deployment discipline, not a research restriction. Organizations don’t need to wait for Congress to establish internal AI governance, define acceptable use policies, implement human-in-the-loop requirements for high-stakes decisions, and create accountability structures for AI outcomes. This is within every organization’s power today.
Fourth, be honest about workforce impact and plan for it. The “jobs” conversation is not going away, and sugarcoating it erodes trust with exactly the people—workers and executives alike—who need to engage with AI productively. Organizations that build workforce transition planning into their AI strategy from the start will manage the change. Organizations that keep their staff’s critical thinking and institutional knowledge muscles engaged will keep their edge. Organizations that pretend there’s no change to manage will create the crisis Davenport fears.
And fifth, redirect the conversation from whether AI should exist to how AI should be deployed. That’s the conversation where organizations, regulators, and the public can actually make progress—and it’s the conversation that the 80–95% failure rate is begging us to have.
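On the second point, here is a deliberately simple sketch of what a systematic selection framework can look like in practice, in Python. The criteria, weights, and candidate scores are hypothetical placeholders of my own, not a published methodology; the point is only that use cases get compared on the same readiness-weighted scale rather than on demo appeal.

```python
from dataclasses import dataclass

# Hypothetical criteria and weights (must sum to 1.0); they deliberately
# emphasize readiness and measurability over raw upside.
WEIGHTS = {
    "data_readiness": 0.30,   # is the underlying data accessible and governed?
    "business_value": 0.25,   # expected impact if the use case works
    "workflow_fit": 0.20,     # can the existing process absorb the change?
    "change_capacity": 0.15,  # is the team prepared and sponsored to adopt it?
    "measurability": 0.10,    # can we define success and kill it if it misses?
}

@dataclass
class UseCase:
    name: str
    scores: dict  # criterion -> 1 (weak) to 5 (strong)

    def weighted_score(self) -> float:
        return sum(WEIGHTS[c] * s for c, s in self.scores.items())

# Two made-up candidates to show how the ranking shakes out.
candidates = [
    UseCase("Invoice-matching agent", {
        "data_readiness": 4, "business_value": 3, "workflow_fit": 4,
        "change_capacity": 4, "measurability": 5}),
    UseCase("Open-ended strategy copilot", {
        "data_readiness": 2, "business_value": 4, "workflow_fit": 2,
        "change_capacity": 2, "measurability": 1}),
]

for uc in sorted(candidates, key=lambda u: u.weighted_score(), reverse=True):
    print(f"{uc.name}: {uc.weighted_score():.2f}")
```

Weighting data readiness and measurability above raw business value is a deliberate choice: it encodes the foundation-first argument directly into how use cases get selected.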
Tom Davenport is right to be alarmed. But the alarm should point us toward better implementation, not less innovation. The AI train isn’t going to slow down. The question is whether we’ll build tracks that can handle the speed—or just stand at the platform wishing it would stop.