You Can’t Manage What You Haven’t Named

Data quality tells you if your data is clean today. The Organizational Malleability Score tells you whether your organization can keep it trusted as the business changes. Most leaders treat these as the same question. They are not.
Kunal Sharma
Vice President, Data Management

A source system gets updated on a Tuesday. The team that made the change tested it, documented it, and sent the notification they were supposed to send. They did everything right.

By Thursday, a regional VP will present revenue numbers to the executive team that are 12% lower than what finance shows. Nobody knows which is right. The meeting stops being about revenue. It becomes about the data. An hour of executive time is gone. The VP leaves the room shaken, not because the number was wrong, numbers are wrong all the time, but because she couldn’t explain why. And she has been in that room before.

A new analyst on the finance team spends the next week trying to reconcile the gap. He is three months into the job. He pulls the lineage documentation and finds a field called net_revenue_adjusted and another called net_revenue_recognized, with no written explanation of the difference. He messages four people. Two do not respond. One says it depends on the period. The fourth, the person who actually knew, left the company eight months ago. By the following Wednesday, after three conversations and a review of a Confluence page that nobody had updated since 2021, he has an answer. The VP has already moved on.

Nobody in that story failed. The team that made the change followed the process. The VP prepared. The analyst worked hard. The problem was not competence. Nobody had built a way to measure how exposed the organization was before that Tuesday arrived.

That is the problem the Organizational Malleability Score is built to solve.

Data quality metrics tell you whether your data is clean today. The OMS tells you whether your organization can keep data trusted as the business changes. Most leaders treat these as the same question. They are not. The first is a snapshot. The second is a capability. In fifteen-plus years of working through transformations across financial services, healthcare, manufacturing, and distribution, the organizations that struggled were not the ones with dirty data. They were the ones with no way to see the exposure coming.

Think about what happened when engineering teams started measuring deployment frequency and mean time to recovery. The conversation changed. It stopped being about whether the team was good and became about the number. Cybersecurity did the same with threat exposure scoring. Boards stopped asking whether security mattered and started asking where the organization sat on the scale. The measurement did not just describe the problem. It made the problem governable.

OMS is that instrument for organizational data capability. It is not a governance checklist or a data quality dashboard. It is a score from 0 to 100 that tells leadership how well the organization can keep its data trusted as the business evolves. One organization I worked with, a large infrastructure operator running a fifteen-year-old legacy platform, entered a major migration at an estimated 18. They exited at 64. The architecture did not move that number. The investments in how their people owned, documented, and certified data did.

The score measures three things.

First, what actually happens when something changes upstream: how long recovery takes, and how many things break along the way. Second, whether the people relying on the data understand it. Not whether a catalog exists, but whether the assets in it carry enough context for someone to act without calling anyone. Documentation written before a business restructure is not metadata. It is archaeology. Third, whether the systems consuming the data can trust what they are reading, which, in an organization running AI at scale, determines whether those systems produce reliable decisions or amplify errors across processes before anyone notices. This is not a separate AI problem. It is a malleability problem with a different label.

Seven signals. Three questions. One score.
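To make the shape of the instrument concrete, here is a minimal sketch of how a composite like this could be computed. Everything in it is an assumption for illustration: the article does not publish the seven signals, their weights, or their normalization, so the names below are hypothetical stand-ins grouped under the three questions above.

```python
from dataclasses import dataclass

# Illustrative only: the actual OMS signals and weights are not
# specified here. This sketch shows the general shape of a
# weighted composite reported on a 0-100 scale.

@dataclass
class Signal:
    name: str
    value: float   # normalized 0-100, higher is better
    weight: float  # relative importance (hypothetical)

def oms(signals: list[Signal]) -> float:
    """Weighted average of normalized signals, returned on a 0-100 scale."""
    total_weight = sum(s.weight for s in signals)
    return sum(s.value * s.weight for s in signals) / total_weight

# Seven hypothetical signals under the three questions:
# change resilience, human context, machine trust.
signals = [
    Signal("recovery_time", 40, 2.0),        # how long recovery takes
    Signal("break_count", 35, 2.0),          # how many things break downstream
    Signal("ownership_coverage", 55, 1.5),   # assets with a named owner
    Signal("definition_freshness", 30, 1.5), # docs updated since the last change
    Signal("context_completeness", 50, 1.0), # enough context to act without calling anyone
    Signal("certification_rate", 45, 1.0),   # certified vs. uncertified assets
    Signal("machine_readability", 60, 1.0),  # consuming systems can trust what they read
]

print(f"OMS: {oms(signals):.0f}")  # one number leadership can track
```

The point is not the arithmetic. A weighted composite is deliberately simple; what makes the number governable is that the same inputs are measured the same way every quarter.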

Here is what it looks like when an organization knows its number.

The quarterly data review used to be defensive. Someone asks why a metric changed. The next forty minutes are spent reconstructing the logic, finding the owner, and determining whether the shift was real or an artifact. The meeting produces activity but not decisions. With an OMS baseline, that same meeting opens differently. The score is on the table. The team knows which signals moved. When a metric shifts, the named owner comes prepared. The forty minutes become ten. What remains is decision-making.

Consider the migration. Instead of discovering six months later that four critical reports have no documented owner and two field definitions have not been updated since the previous system went live, the team knows before the first sprint begins. The exposure is visible. The remediation is sequenced. The Tuesday that used to be a fire drill becomes a controlled transition.
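A sketch of what that pre-sprint check could look like: a pass over exported catalog metadata that flags unowned assets and stale definitions before migration planning starts. The field names, the example records, and the one-year staleness threshold are all assumptions for illustration, not a reference to any particular catalog’s API.

```python
from datetime import date, timedelta

# Hypothetical catalog export. In practice this would come from your
# metadata platform; field names and records are illustrative.
catalog = [
    {"asset": "regional_revenue_report", "owner": None, "last_updated": date(2021, 3, 1)},
    {"asset": "net_revenue_recognized", "owner": "finance-data", "last_updated": date(2024, 11, 5)},
    {"asset": "net_revenue_adjusted", "owner": None, "last_updated": date(2020, 6, 18)},
]

STALE_AFTER = timedelta(days=365)  # assumed threshold; tune to your change cadence

def exposure(assets: list[dict], today: date) -> list[str]:
    """Flag assets with no documented owner or definitions older than the threshold."""
    findings = []
    for a in assets:
        if a["owner"] is None:
            findings.append(f"{a['asset']}: no documented owner")
        if today - a["last_updated"] > STALE_AFTER:
            findings.append(f"{a['asset']}: definition stale since {a['last_updated']}")
    return findings

# Fixed date so the example output is reproducible.
for finding in exposure(catalog, date(2025, 1, 15)):
    print(finding)
```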

Brittle organizations don’t fail all at once. They fail one Tuesday at a time.

Most organizations do not know their score. Not because they have not tried to manage data quality, most have, but because data quality and data malleability are different measurements. Nobody has asked the second question in a way that produces a number. Organizations at the low end consistently face transformation timelines two to three times longer than those at the high end. That gap appears in program overruns, delayed AI deployments, and analyst hours spent acting as a translation layer for data that should speak for itself.

That analyst’s week is not exceptional. It is the operating condition of most data teams, measured in hours that never appear on a dashboard and never make it into a program retrospective.

The next article starts where the score begins, with the question organizations measure least accurately and the answer that almost always surprises them.
