Part 2 of 2: The Three Failures That Will Define Who Survives AI
This is the second in a two-part series. Part 1 identified the three failure modes (Blind Velocity, the Replacement Trap, and the Thinking Gap) that are turning enterprise AI investments into the most expensive technology failure in corporate history. Part 2 examines why these failures keep repeating and what the path forward looks like.
The Fourth Pillar: Why Organizations Keep Making These Mistakes
If these failures are so predictable, why do smart organizations keep walking into them? The answer is structural. And it has been there the whole time.
Most organizations run transformations through three pillars: people, process, and technology. This framework has been the foundation of enterprise strategy for a generation. Data gets lumped under technology, treated as a plumbing issue, something the infrastructure team handles. When budgets are allocated, data competes with application development, cloud migration, and cybersecurity for the same technology dollar. When organizational accountability is assigned, data typically reports up through the CIO in terms that do not connect to business outcomes.
This classification is the root of the problem. Data is not a technology concern. It is the Fourth Pillar, independent of and foundational to the other three.
What the Misclassification Actually Does
When data gets lumped under technology, three things happen, and each maps directly to one of the failures described in Part 1.
First, nobody with business context owns data quality. The CIO owns infrastructure. The CDO, if one exists, owns governance policy. But nobody owns the question that actually matters in the age of AI: Is this data ready to be reasoned over by a machine that will make decisions with it? That is not a technology question. It is a business strategy question. When it goes unasked, you get Blind Velocity.
Second, data professionals are treated as technicians rather than strategists. When data sits under technology, the people who work with it are seen as support staff, task doers by organizational design. They build pipelines, write ETL jobs, respond to tickets. They are rarely invited into strategic conversations about what the data should enable. The org chart itself creates the Replacement Trap, by design.
Third, data investment gets evaluated on technical criteria rather than decision quality. When the board asks “Is our data infrastructure adequate?”, the answer comes back in uptime percentages and query performance. These metrics tell you the plumbing works. They do not tell you whether what is flowing through the pipes is trustworthy. Without that distinction, organizations lack the vocabulary to even identify the Thinking Gap, let alone measure it.
What This Looks Like When It Stalls a Project
Consider a financial services firm that built a natural language generation tool capable of automating 50% of their 80-page analyst reports. Six-month pilot. The analysts who used it loved it. Accuracy was strong. ROI was clear. By every technical measure, the project was a success. It went live a year late.
The operations team responsible for integrating the tool into production workflows had a 9-month backlog. They were not blocking it out of hostility. They simply had no incentive to prioritize it. Their performance metrics rewarded operational stability, not analyst productivity. There were no shared KPIs between the two teams. Nobody owned the question: is this organization ready to absorb this capability? A year of foregone analyst productivity on a tool with proven ROI, because the organizational foundation was nobody’s job.
This is the Fourth Pillar failure in its clearest form. The technology worked. The users validated it. The business case was proven. But the organizational foundation (the cross-functional alignment, the shared accountability, the change management infrastructure) lived in the gap between pillars, unowned and unfunded. BCG’s survey of 1,000 CxOs across 59 countries found that 70% of AI challenges originate from people and process, not technology or algorithms. This project is what that statistic looks like in practice.
Elevating data to pillar status means treating it the way organizations treat people, process, and technology: its own strategy, its own investment thesis, its own accountability structure, and its own seat at the leadership table. A separate Fortune 500 financial services company we worked with made exactly that shift, moving data governance from an IT function to a board-level concern with its own roadmap and executive accountability. The downstream effect was not just cleaner data. It was an organization that could actually respond to a new AI use case without stopping to rebuild its foundation from scratch. That is the compounding return that pillar-level investment creates, and the one that organizations treating data as plumbing will never see.
Foundation First: The Path Forward
The three failure modes. The Fourth Pillar. These are not separate problems requiring separate solutions. They are symptoms of the same underlying condition: organizations deploying AI on data that was never architected to support it, managed by people who were never developed to question it, measured by metrics that cannot detect when things go wrong.
Foundation First is the connective principle. Your data architecture is not a technical concern to be handled after the AI strategy is set. It is the AI strategy.
Foundation First says: before you worry about whether AI is amplifying your workforce, make sure it has something trustworthy to work with. Before you worry about whether your people are task doers or problem solvers, make sure the data layer is not forcing task-doer patterns by being too unreliable to support real analysis. Before you evaluate whether AI is eroding critical thinking, build a foundation of trusted data that allows critical thinking to flourish.
Foundation First, in Practice
One of America’s largest steel manufacturers grew from $20 billion to $35 billion in revenue during a seven-year engagement we had with them. We built the data infrastructure that made that growth possible, streamlining their conversion work from 10+ dedicated internal resources handling a single division down to a team of 4 running 3 to 6 divisions concurrently. The foundation work was not a line item in the P&L. It was what made the scale achievable.
The clearest picture of what Foundation First looks like for people comes from a global technology company with more than 200,000 employees across six continents. When we began, the people data was scattered across dozens of incompatible systems. Fundamental workforce questions took days to answer, and the answers varied depending on which system you asked.
The task-doer approach would have been to migrate the data, build new reports, and declare victory. Instead, we focused on the foundation: unifying more than 15 HR data sources into a single governed data layer using Snowflake, dbt, and Tableau, standardizing definitions so that core metrics meant the same thing to finance, recruiting, and operations, and systematically cleaning up years of accumulated inconsistencies.
The result was workforce data that could actually be trusted, for the first time. Questions that previously took days to answer, and still came back with conflicting numbers, became answerable in real time. Cross-functional alignment between HR, Finance, and Operations became possible because everyone was finally working from the same numbers. And on that foundation, the organization deployed predictive attrition modeling that enabled proactive retention, a capability that had been on the roadmap for years but had no reliable training data to stand on until the foundation work was done. You cannot build meaningful intelligence on top of data nobody believes. The foundation work made the advanced work possible.
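To make the phrase “standardizing definitions” concrete, here is a minimal, hypothetical sketch of the single-governed-definition idea: two HR extracts disagree on what counts as “headcount” until both are mapped through one shared rule. Every system name, field, and number below is invented for illustration; the actual engagement used Snowflake, dbt, and Tableau across more than 15 sources at far larger scale.

```python
import pandas as pd

# Hypothetical source A (payroll extract) includes contractors;
# hypothetical source B (HRIS extract) does not.
payroll = pd.DataFrame({
    "employee_id": [1, 2, 3, 4],
    "worker_type": ["employee", "employee", "contractor", "employee"],
})
hris = pd.DataFrame({
    "employee_id": [1, 2, 4],
    "worker_type": ["employee", "employee", "employee"],
})

# Before governance: the same question returns two different answers.
print(len(payroll), len(hris))  # 4 vs 3 -- which one does finance report?

# The governed definition, written once and applied everywhere:
# "headcount" = active employees, excluding contractors.
def governed_headcount(df: pd.DataFrame) -> int:
    return int((df["worker_type"] == "employee").sum())

# After governance: every source yields the same number.
assert governed_headcount(payroll) == governed_headcount(hris) == 3
```

The point is not the ten lines of code; it is that the rule lives in one place, so finance, recruiting, and operations can no longer each carry their own private version of the metric.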
Foundation First for People
Foundation First does not stop at infrastructure. It extends to how organizations develop their people.
If the Replacement Trap is the failure mode, the path forward is an orientation toward work that treats AI as leverage, something to cultivate, measure, and reward. Problem solvers see a different equation than task doers. Where a task doer sees a threat (this tool does what I do, but cheaper), a problem solver sees the ability to explore ten hypotheses instead of one, to analyze three years of data instead of three months, to surface patterns that were always there but never accessible at human speed.
This is not about being “good with AI tools.” It is about knowing what questions to ask and why those questions matter. It is the difference between asking AI to generate a report and asking AI to surface why attrition in your Southeast Asia operations spiked 40% in Q3 despite above-market compensation.
Anthropic’s December 2025 study of its own engineering workforce, based on surveys of 132 engineers and 200,000 internal usage transcripts, found that AI was making software developers more versatile, not less relevant. Engineers took on increasingly complex tasks, worked across broader technical domains, and accelerated their own learning. The problem solvers got more capable. That is what it looks like when the foundation is right and the people are built to use it.
Where to Start This Quarter
If most of your workforce has been conditioned as task doers by decades of rigid systems, you cannot fire them and hire problem solvers. That would not work, because the problem solvers you would hire would be shaped by the same systems and incentive structures. The transition has to be structural.
Identify your Context Translators and make them visible. Every organization has people who bridge domain knowledge and data logic, who know not just what the system does but why the business built it that way. They are typically buried in mid-level roles. Find them. Give them a title that reflects their strategic value. Make them the nucleus of your AI transition teams.
Restructure at least one team around decisions rather than tasks. Pick one functional team (workforce planning, supply chain analytics, financial reporting) and redesign how you measure them. Not by output volume, but by the quality of decisions their work enables. This is not a company-wide transformation. It is a proof point, one that gives your conditioned task doers a visible model to move toward, rather than an abstract concept to puzzle over.
Pair AI deployment with data literacy, not just tool training. Most corporate AI training teaches tool use: how to write a prompt, how to read a dashboard. That is not enough. People also need to evaluate whether an AI output makes sense, which requires understanding where the data came from and what assumptions it carries. That is data literacy. It is what separates someone who can work with AI from someone who is just doing old tasks faster with new tools.
The Reinforcing Cycle
These three failures do not operate independently. They feed each other, and understanding how is what separates organizations that address symptoms from ones that fix the system.
Blind Velocity creates the conditions for the Replacement Trap. When the data layer is unreliable, the only safe role is task execution, because the data does not support the kind of deeper analysis that would allow people to exercise real judgment. The Replacement Trap enables the Thinking Gap. When people are conditioned to follow workflows rather than exercise judgment, they defer to AI outputs uncritically, because questioning outputs was never part of the job. And the Thinking Gap accelerates Blind Velocity. When no one is questioning AI outputs, no one catches the data quality issues corrupting the AI’s reasoning, and the cycle tightens.
It is a reinforcing cycle. Breaking it at any single point provides temporary relief. Breaking it at the root, by treating data as the Fourth Pillar with its own strategy, investment, and accountability, is what makes the fix durable.
Three Questions to Ask This Quarter
Every organization’s situation is different. But there are three questions that help leaders locate where they actually stand rather than where they assume they stand.
“If we turned off our AI tools tomorrow, could our people still make good decisions with the data we have?” This exposes both the Thinking Gap and Blind Velocity simultaneously. If the answer is no because people have become dependent on AI outputs, you have a Thinking Gap problem. If the answer is no because the data is too fragmented for anyone to interpret without AI doing the work, you have a Blind Velocity problem. If it is both, you have a compounding problem, and you need to start with the data layer.
“When we evaluate our data professionals, are we measuring task completion or decision quality?” This exposes the Replacement Trap at the organizational level. If your data team’s performance reviews emphasize pipeline uptime, ticket closure rates, and migration completion percentages, you have built a system that rewards task execution and penalizes the kind of exploratory, judgment-intensive work that AI cannot replicate.
“Does data have its own strategy in our organization, or is it a line item under technology?” This exposes the Fourth Pillar gap. If your data strategy is a subsection of your IT strategy, if data investment competes with application development and cloud migration for the same budget, data is being treated as plumbing. And plumbing does not get the investment, the leadership attention, or the organizational accountability required to serve as the foundation for AI.
Answer these honestly. They will tell you more about where you actually stand than any assessment framework or maturity model.
The Uncomfortable Truth
My board member was right. It comes down to identity, not just for individuals, but for organizations.
Julien Bek, a partner at Sequoia Capital, published an essay this month that arrived at the same conclusion from a completely different direction. He called it “Services: The New Software.” His thesis: if you sell the tool, you are in a race against the model. But if you sell the work, every improvement in the model makes your service faster, cheaper, and harder to compete with. The task-executing organization sells the tool. The problem-solving organization sells the work. That is the same identity distinction, applied to a business model.
But identity is not fixed. And the organizations that change fastest, from task-executing enterprises to problem-solving ones, are building structural advantages that late movers will not be able to buy their way into. Not because the AI tools will become unavailable. They will be commoditized. What will not be commoditized is what sits underneath them: trusted data, people who know how to interrogate it, a culture that rewards decision quality over decision speed. Those take years to build. They cannot be licensed.
The compounding math is unforgiving. Foundation work that takes 18 months takes 18 months no matter when you begin; delay does not compress it. Every quarter an organization delays is a quarter a competitor spent building infrastructure that compounds. The gap between organizations that started this work in 2025 and those that start in 2027 will not close with a larger 2028 budget.
Research consistently finds that fewer than a quarter of companies currently generate tangible value from AI investments. The rest are investing heavily and seeing little return. The difference is not luck, and it is not technology. It is the willingness to do foundational work that does not generate press releases but makes everything else possible.
The question every CEO should be able to answer before their next board presentation on AI is not “How much are we spending?” It is: What is our AI actually reasoning over, and do we trust it?
For most organizations right now, the honest answer is: not yet. But the window to fix that is still open. The organizations that use it will look back on this moment as when they separated. The ones that do not will spend the following decade wondering why their AI investments kept underperforming, and funding consultants to tell them what this series already did.