Will AI Ever Play in Peoria? The Enterprise Reality Check
3. AI Coding Assistants: Impressive Stats, Questionable Reality
Now, for the elephant in the room: AI coding assistants. Microsoft claims 30% of its code is AI-generated; Salesforce says 50% of its work is AI-driven. Impressive numbers—if you take them at face value.
But talk to actual developers and you'll hear a different story. As one developer put it: "If you count every nonsensical autocomplete suggestion I immediately delete, sure, maybe 30% of my 'code' is AI-generated."
The disconnect here is telling. Silicon Valley loves metrics that sound impressive, even when they're meaningless in practice. AI can certainly help develop code, but it's far from the silver bullet it's claimed to be.
Why AI Isn’t Working in the Real World (Yet)
This finally brings me to the old vaudeville question—“Will it play in Peoria?” (For those unfamiliar with the phrase, it was a test of mainstream viability.) Today, we might be asking, “Will this AI solution actually work for a mid-market manufacturer in Ohio?”
Too often, the answer is no. Here’s why:
1. The “We Know Better” Fallacy
Tech vendors have a bad habit of presuming they understand business workflows better than the businesses themselves. The result? Solutions that look great in demos but fail in the wild.
Example: An AI procurement tool might optimize for cost—but if it doesn’t account for supplier relationships or lead times, it’s worse than useless.
2. The Process Problem
AI models thrive on clean, predictable data. But real-world business processes are messy, human-driven, and full of exceptions.
Example: An AI sales assistant might generate perfect email drafts—but if it can’t read the room (or the client’s tone), it’s a liability.
3. The Vibes-Based Approach
Some argue that AI can “figure out” optimal processes by generating thousands of simulations. The theory? Throw enough data at the wall, and the best path will emerge.
The reality?
♦ Unguided AI produces as much noise as insight.
♦ Businesses need repeatability, not just experimentation.
♦ Synthetic data ≠ real-world nuance.
In short, AI can mimic—but it can’t contextualize.
How to Fix Enterprise AI (Hint: It’s Not More Hype)
If we want AI to work outside Silicon Valley, we need to:
♦ Start with the problem, not the tech. Stop asking, “What can AI do?” and start asking, “What’s broken, and how do we fix it?”
♦ Observe real workflows. Spend a week (or more) working in a distribution center before automating it.
♦ Pilot in the wild. Lab results don’t matter—only real-world performance does.
♦ Augment, don’t replace. AI should assist humans, not pretend to replace them.
At the end of the day, AI isn't failing because the tech isn't advanced enough (a topic I have covered in this column before); it's failing because we're applying it to the wrong problems in the wrong ways. Until we shift focus from hype-driven demos to outcome-driven solutions, AI will remain a solution in search of a problem. And if that doesn't change? Well, let's just say that Peoria isn't holding its breath.