Stuck in the AI Quagmire
Great pilots die in the mud. Culture, not checklists, is the way out.
It’s been a busy week of back-and-forth conversations. All AI. We’re talking executives from international banks, digital marketing firms, drug companies, insurers, and software builders. Executive education participants and MBA candidates at Booth, and consulting clients on the phone. Different industries, same refrain: AI feels stuck.
They’ve poured resources into pilots. They’ve hired vendors, spun up proofs of concept, and tested models in pockets of the business. And yet—they find themselves stranded at the end of a trail of POCs, sinking in a swamp of wasted time, money, effort, and organizational patience.
They describe the same challenges in different circumstances:
Slow starts that never gain momentum.
Stovepiped projects that bloom unevenly and never connect.
Inconsistent approaches that confuse more than clarify.
New risks—bias, privacy, hallucination—that multiply faster than they can be named, let alone managed.
Even the wins feel hollow. A “great pilot” doesn’t mean much if you can’t scale it. Leaders ask: What can we trust? Where do we start? Who should be on the team? How formal should it be? Can our people actually do this? Are our AI hopes and dreams even real?
For some of them, the natural reflex is to grab for templates and forms. If things feel messy, maybe a new intake sheet will fix it. Maybe a heavier framework will give the illusion of control. As a consultant, I'm all too happy to help. But: Discovery turned into paperwork isn't discovery at all. It's busywork masquerading as progress.
And that’s the paradox: Too risky to start, so people lie down. In trying to manage the risk, too many organizations squeeze out the possibility. Excitement becomes exhaustion, clarity begets clutter, and they go from movement to meetings to meh.
As we know, the problem isn't lack of ideas or lack of ambition. And we've talked about how to get traction when you're starting. The next problem is finding the right level of scaffolding.
Scaling AI shouldn’t be about a thousand disconnected pilots. It should be about a shared capability—rails the whole enterprise can run on. When teams know what “good” looks like, when risks are named and monitored, when roles are clear and adoption is planned, then AI doesn’t feel like chaos. It feels like momentum.
That’s not bureaucracy. That’s culture.
Executives don’t need another binder of checklists. They need a story they can believe in. A story where their people rise, not because they’re superheroes, but because the scaffolding makes ordinary teams extraordinary.
The real question isn't "Can AI deliver?" It's "Can you create the conditions for it to matter? And cultivate it without killing it?"
That shift—from pilots to purpose, from forms to conversations, from hopes to impact—is how AI moves from promise without progress to progress with lasting value.
More to say on that. Stay tuned.
— James.


