
The Agentic Trough Is Coming: Build Now or Get Stuck in It

Most organizations will spend on agentic AI and get little back. Foundation work in data, processes, and people separates winners from those who stall.
Steve Novak
Vice President

Most organizations are about to spend significant sums on agentic AI and get very little in return. Not because the technology fails. Because what needs to come before the technology was never built.

That gap between what agents can do in a demo and what they can do inside a real enterprise is the agentic trough. It is not a technology failure; it opens when deployment outpaces preparation. Hype peaks, enterprise results disappoint, and the organizations that did not do the preparation work spend the next two years wondering what went wrong. Generative AI followed this exact arc. Agentic AI will too.

The difference this time is that the stakes are higher. Agents do more than produce outputs. They act. A badly deployed agent running on poor data and broken processes does not slow you down. It accelerates you in the wrong direction.

[Figure: hype cycle diagram showing the agentic trough, where unprepared organizations stall and prepared organizations emerge with the foundation built.]

The question worth asking now, before the trough arrives, is what separates the organizations that will come through it with working systems from the ones that stall.

The answer is not better models. It is the foundation.

Agents Do Not Fix Broken Processes. They Execute Them Faster.

BCG surveyed 1,000 CxOs across 59 countries and found that organizations getting real returns from AI allocate 70% of their investment to people and processes, 20% to technology and architecture, and 10% to algorithms. Most organizations invert this ratio. McKinsey's 2025 State of AI report shows the result: 88% of companies are using AI in at least one function, but only 39% report any enterprise-level EBIT impact.

The investment is flowing. The organizational readiness is not keeping pace.

An autonomous agent requires three things to work in production: clean, governed data, so its outputs are trustworthy; well-defined processes, so it knows what to do and when; and an organization ready to absorb it, meaning roles are redefined, incentives are aligned, and leadership is committed to acting on what the agents produce.

Miss one and the deployment stalls. Miss the data, and agents produce outputs nobody trusts. Miss the process, and you have automated a broken workflow, which is acceleration in the wrong direction. Miss the organizational piece, and a technically working agent sits unused because the people around it were not ready to change.

I evaluate readiness across nine dimensions before any AI deployment. Most organizations do serious due diligence on the technical ones: data quality, governance, and infrastructure. Those matter. But the gaps that kill deployments are almost never technical. They are in the operating model nobody redesigned before agents started making decisions, the measurement framework nobody built (leaving no way to defend the investment six months in), and the culture question nobody asked: are the people who need to change behavior actually willing to? An agent that works technically but sits unused is still a failed deployment. The architecture review will not catch that.

The technology, the how, comes last. It only works when the what, the why, and the who have been answered first.

What It Looks Like When the Foundation Is Right

We spent 18 months building a workforce intelligence platform for a global technology company: 200,000-plus employees, data scattered across dozens of HR systems, and a Workday migration underway that kept changing the underlying data structures as we built.

We started with decisions, not technology. What does the CHRO need to see every morning? What do recruiters need daily to move faster? Who will actually change behavior based on what we build?

On the data side, we built a semantic layer in dbt that gave every metric one definition: "new hire," "time-to-fill," "attrition." One definition each, governed and version-controlled. Simple in concept. Months of alignment work in practice, because every business unit had its own definition and its own reasons for it.
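The real implementation lives in dbt, so the sketch below is a language-neutral illustration of the pattern rather than dbt syntax; every name, field, and expression in it is hypothetical. What it shows is the contract: one governed, versioned definition per metric, and a single place every consumer resolves it from.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MetricDefinition:
        """One governed definition per metric; changes go through review."""
        name: str     # canonical metric name, e.g. "time_to_fill"
        logic: str    # the single expression every dashboard resolves to
        owner: str    # the team accountable for the definition
        version: str  # bumped only through the governed change process

    REGISTRY = {
        "new_hire": MetricDefinition(
            "new_hire", "hire_date within current fiscal quarter",
            "people-analytics", "2.1"),
        "time_to_fill": MetricDefinition(
            "time_to_fill", "avg(offer_accepted - requisition_opened)",
            "talent-acquisition", "1.4"),
        "attrition": MetricDefinition(
            "attrition", "exits / average headcount, trailing 12 months",
            "people-analytics", "3.0"),
    }

    def resolve(metric: str) -> MetricDefinition:
        # Every consumer comes through here; there is no second definition.
        return REGISTRY[metric]

The structure is trivial. The months of work were in getting every business unit to agree on what goes inside each entry.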

On the organizational side: weekly feedback loops with end users from month one. Role-based permissions designed with the people who would use them. Incremental delivery, so teams got real value well before the 18-month mark rather than waiting for a big-bang launch of something nobody had seen until go-live.

The result was 30-plus Tableau dashboards built around how recruiters and executives actually make decisions. Over 10,000 views per month. Analyst time on data reconciliation is down 60%. But the metric that matters most cannot be measured. Leadership stopped asking for custom reports and started trusting the platform.

That shift from "can someone pull this?" to "let me check the dashboard" is the clearest signal that a foundation is working. I have seen it happen twice in 26 years of data work. It only happens when the organizational readiness runs in parallel with the data work, not after it.

Phase two built on that trust. ML pipelines replaced legacy forecasting code, and run time dropped from days to minutes. Natural language search. Automated insights delivered by role and function. Today, agents on that platform deliver data tailored to the person asking, more accurate and more timely, based on who they are, what they need, and what they are permitted to see.
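What "based on who they are and what they are permitted to see" implies is an order of operations, sketched below in Python. Every name here is a hypothetical illustration: the permission filter runs against the governed layer before any data is fetched and before the agent reasons over it.

    ROLE_SCOPES = {
        "recruiter": {"requisitions", "pipeline"},
        "chro": {"requisitions", "pipeline", "attrition", "compensation"},
    }

    def fetch_governed(domains: set[str]) -> list[str]:
        """Stand-in for a query against the governed semantic layer."""
        return sorted(domains)

    def answer(role: str, question: str, requested: set[str]) -> str:
        allowed = ROLE_SCOPES.get(role, set())
        visible = requested & allowed      # permission filter runs first
        if not visible:
            return "Nothing you are permitted to see answers that question."
        data = fetch_governed(visible)     # only then is data fetched
        return f"[{role}] {question} -> grounded in: {', '.join(data)}"

    print(answer("recruiter", "How is pipeline health trending?",
                 {"pipeline", "compensation"}))

A recruiter asking a question that touches compensation gets an answer grounded only in what their role can see; the agent never holds data it should not.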

The agents work because the foundation is solid. Not because of the model or the orchestration framework. Because of the 18 months that came before.

What It Looks Like When the Data Is Ready, but the Organization Is Not

A financial services firm built a natural language generation tool that automated 50% of the work required to produce 80-page analyst reports. Six-month pilot. Analysts loved it. Strong accuracy. Clear ROI on paper. The technology was ready by month six.

The organization sat on it for nearly a year.

Why? The operations team had a nine-month integration backlog, and nobody had asked them before development began whether they had capacity. That single omission turned a technical success into a production failure.

But the backlog was a symptom. Three things went wrong underneath it.

The pilot ran on curated, clean data. The production environment lacked the data governance required to support the tool at scale. Nobody assessed that gap before celebrating the pilot's success. The metrics said "ready." The production environment was not.

The analysts who loved the tool were problem solvers, and they saw it as leverage. The operations team saw it as a threat to their workflow stability. Nobody invested in helping ops see the opportunity rather than the disruption. Two teams, same tool, completely opposite incentives.

And leadership accepted the pilot results without asking whether the organizational foundation existed to support production deployment. The metrics said "success." Nobody asked: success at what scale, under what conditions, with whose cooperation?

They used AI to do the same thing faster. The firms that win will use it to do something they could not do before, and that requires a different kind of preparation. When a deployment stalls this way, leadership almost always blames the technology rather than asking whether the organizational foundation was ever in place.

The Economics Make the Case

We modeled a common high-volume enterprise process to put a number on the difference: 15 people handling eight sequential steps, fully loaded at $110K per person. $1.65M per year.

The same process with agents deployed on a weak foundation, with poor data quality, processes that have not been redesigned, and no organizational alignment, runs $2.65M per year. Sixty percent more than manual. The agents generate output. The output is not trusted. Humans review everything. You are paying for the technology plus a supervision layer that did not exist before.

The same process with agents on a strong foundation, clean data, redesigned workflows, and an aligned team runs $1.2M per year. Twenty-seven percent less than manual, with dramatically higher throughput. Fewer people in the loop because the agents are reliable. The people who remain handle judgment, exceptions, and strategy.

Same technology. Same agents. Different organizational readiness. A $1.45M annual gap on a single process.

Scale that across an enterprise running dozens of high-volume processes and the number becomes a board conversation.
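The model fits in a few lines. The headcount and loaded cost are the inputs stated above; the weak- and strong-foundation totals are the model's quoted outputs, taken as given here rather than re-derived (the prose's "sixty percent" rounds the 61% this prints).

    HEADCOUNT, LOADED_COST = 15, 110_000
    manual = HEADCOUNT * LOADED_COST    # $1,650,000 per year
    weak = 2_650_000                    # agents plus a full human review layer
    strong = 1_200_000                  # agents plus a small judgment team

    print(f"manual:  ${manual:,}")
    print(f"weak:    ${weak:,} ({weak / manual - 1:+.0%} vs manual)")
    print(f"strong:  ${strong:,} ({strong / manual - 1:+.0%} vs manual)")
    print(f"gap:     ${weak - strong:,} per process per year")
    for n in (10, 25, 50):              # the gap scales linearly
        print(f"{n} similar processes: ${n * (weak - strong):,}")

At 25 processes the gap passes $36M a year, which is what turns a deployment detail into a board conversation.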

What to Do Before the Trough Hits

Three things worth doing this quarter.

First, quantify the annual cost of your current foundation gaps. Manual reconciliation, analyst time spent cleaning instead of analyzing, AI initiatives that stall before production, decisions made on data nobody fully trusts: these have a dollar value. Most organizations know the gaps exist; few have quantified them. That number changes the conversation from "Should we invest in the foundation?" to "Can we afford not to?" It also creates the organizational permission to act before the trough hits rather than after.
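A back-of-envelope version of that first step, with deliberately invented inputs; swap in your own estimates. The line items mirror the list above, and the hourly rate is just a fully loaded salary spread over a working year.

    LOADED_HOURLY = 110_000 / 1_880     # loaded salary over ~1,880 working hours

    drain_hours_per_year = {
        "manual reconciliation": 6_000,
        "cleaning instead of analyzing": 9_500,
        "rework on untrusted numbers": 2_200,
    }
    stalled_pilots_sunk = 750_000       # initiatives that never reached production

    gap_cost = (sum(h * LOADED_HOURLY for h in drain_hours_per_year.values())
                + stalled_pilots_sunk)
    print(f"annual cost of foundation gaps: ${gap_cost:,.0f}")

With these invented inputs the total is about $1.8M. Your real number is the one that reframes the investment question.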

Second, ask who needs to change behavior once the system is live, and whether they are actually ready. Are their incentives aligned with the new model? Has their workflow been redesigned? Do they understand what is changing and why? If the answer to any of those is no, the deployment will stall regardless of how well the technology works. That question gets asked on day one, or it does not get asked at all.

Third, be honest about where you actually are across all nine readiness dimensions, not just the data ones. The gaps that matter most are almost never in the architecture. They are in the operating model, the measurement framework, and the culture. A technical audit will not surface them.

The agentic trough will not last forever. The organizations that built their foundation will come through it with a structural advantage: better data, aligned teams, and deployment patterns that actually work at scale. Those who waited will still be starting that work while their competitors are scaling.

Build now. Not when the agents are ready. Now.

If you are pressure-testing your own readiness before agentic AI arrives, the Four-Gate Decision Framework routes any use case to the right solution tier, or tells you it does not need AI at all. The ROI Drain Calculator quantifies what your current foundation costs you annually.
