Your Head of Finance built a budget variance tool in Claude three months ago. It pulls live data from your ERP. IT found out about it last week. Down the hall, your Marketing Ops lead has a campaign attribution model running through an API key she set up herself. Your VP of Sales is scoring pipeline with something he wired together over a long weekend.
None of them asked for permission. None of them are reckless. They had a deadline and no approved alternative — and they made a reasonable decision given those constraints. The conversation happening in your leadership team right now is probably some version of “how do we stop this?” That’s the wrong question. The right one is: why did they need to?

The Sprawl Numbers Are Worse Than You Think
94% of enterprises now flag AI sprawl as a top concern, according to OutSystems’ 2026 State of AI Development report — a survey of 1,900 IT leaders. The number is striking, but the detail underneath it is more useful: only 12% of those organizations have implemented a centralized platform to manage it. The other 88% are running reactive controls — audits, bans, review cycles — on a problem that’s already past them.
28% of enterprises now run more than 10 different AI apps. 66% plan to add more over the next 12 months. Only 35% of enterprise leaders say the AI tools in their organization actually go through proper approval channels. Put differently, roughly two out of every three AI tools in use are running on infrastructure IT has never reviewed.
The piece that reframes everything: the people building outside governance aren’t junior employees experimenting. Retool’s 2026 Build vs. Buy Report, which surveyed 817 builders across engineering, ops, IT, and data roles, found that 64% of shadow builders — those who built something outside official IT channels — were senior managers and above. These are experienced operators making a deliberate choice — not because they don’t understand the rules, but because the rules offer them nothing fast enough to be useful.
76% of enterprises have experienced at least one negative outcome because of disconnected AI. Security incidents. Duplicate tooling. Data that doesn’t reconcile across teams. The sprawl has real costs. But the cause isn’t employee behavior.
Why Every Fix So Far Has Made It Worse
The standard response to AI sprawl follows a predictable playbook: issue a policy, centralize procurement, run an audit, ban the tools IT didn’t approve. Each of these treats a symptom, not the cause.
Banning tools doesn’t remove the underlying need. It pushes it further underground and makes the next audit more surprising. Centralized procurement adds 3–6 month cycles to problems that a competent operator can solve in 3 days with the tools they already have. Shadow AI audits catch the sprawl after the fact — they don’t offer a faster sanctioned path, so they don’t change the underlying decision your ops leaders are making.
Only 12% of enterprises have actually built a centralized platform to govern AI. The other 88% are relying on reactive controls. Reactive controls, applied to a problem caused by speed asymmetry, don’t work. They just make the approved path slightly more unpleasant without making it any faster.
Most governance approaches assume the problem is employee behavior. The real gap is structural — there’s no approved path fast enough to compete with the shadow one. A different approach treats speed as a governance requirement, not a trade-off against it. That distinction matters, because you can’t audit your way to a faster approved path.
The Real Problem Is a Missing Answer to One Question
At the center of every AI sprawl problem is a question that nobody in the organization has formally answered: what should business teams be allowed to build, and where should they build it?
Without a clear answer, every team defaults to whatever’s fastest. For most business ops leaders, that’s a consumer-grade AI tool, a third-party SaaS product, or a spreadsheet with an API key duct-taped to it. Each one works for the immediate problem. None of them are governed. None of them talk to each other. And the cost compounds.
30% of enterprise leaders admit they’re wasting money on redundant AI software. That’s the downstream price of leaving the question unanswered — not malicious spending, just 40 people in separate departments solving the same problem 40 different ways because there was no shared answer to “where do we build this?”
The sprawl isn’t evidence of undisciplined employees. It’s evidence of a vacuum. And a vacuum IS a policy — just not the one you’d choose if you were making it intentionally.
What a Sanctioned Fast Path Actually Looks Like
The companies getting this right aren’t running faster audits. They’re not adding more governance steps or tighter procurement rules. They’re changing the architecture.
The insight is simple: the approved path has to be faster than the shadow path, or no one takes it. When your procurement process takes longer than an employee’s patience, you’ve already lost. The answer isn’t to make the shadow path slower. It’s to make the approved path actually usable.
What that looks like in practice: business teams build inside infrastructure that’s already IT-approved — so governance happens at the platform level, not in a review queue afterward. No ticket. No 6-month wait. No data leaving the organization’s control. This is what Peridot is built for: Marketing Ops, Sales Ops, and Finance teams ship real AI-powered tools inside their company’s VPC, with no IT ticket and no shadow AI risk. The tool is sanctioned by design, not by retrospective audit.
The difference between “IT reviews what you built” and “the environment you built in was already IT-approved” is the difference between a governance process and a governance architecture. One creates a bottleneck. The other removes the trade-off.
The conversation worth having with your leadership team isn’t “how do we stop shadow AI?” It’s: why is the shadow path faster than the sanctioned one — and what do we do about that?
When your most experienced operators are consistently choosing the unsanctioned path, that’s not a discipline problem. That’s signal. The sprawl is the org telling you something your AI strategy hasn’t answered yet.
If you’re seeing this in your org, it’s worth a conversation.