The scaling crisis: Why AI training fails when diverse voices aren’t in the room
The statistics haunt every AI conference: the majority of AI projects fail to scale beyond the pilot stage. We keep designing AI training as if it's about teaching people to use tools, when actually it's about uncovering why they won't.
The warning signs are everywhere. An HR manager successfully using LLMs for her own work while her organisation remains oblivious to AI's potential. Insurance workers feeding sensitive financial data into chatbots without understanding what happens to that information. A university department implementing AI essay marking, unaware that students are already mobilising against what they see as automated assessment of their expensive education.
These aren't technology failures. They're failures of imagination about who needs to be in the room when we design AI training.
The trust deficit nobody's measuring
At the AI Summit London's medicine panel in June, Patty O'Callaghan of Charles River Laboratories laid bare an uncomfortable truth: doctors don't trust AI outputs. Not because the technology isn't sophisticated or can't add value, but often because they've read about hallucinations in the news and nobody's taken the time to explain how these tools actually work, when they're reliable and when they're not.
Mark Barber from AstraZeneca offered a telling solution: they brought radiologists directly into their development team. Not as consultants or validators, but as integral members shaping how AI would work in their actual context. The trust followed naturally because the radiologists understood not just what the AI did, but why it made the choices it made.
This mirrors precisely what I discovered running a recent research festival at my university. When we used a digital feedback platform to gather live questions during my talk—making diverse perspectives visible in real time—engagement soared. The same platform, deployed passively beside research posters with QR codes, gathered exactly zero responses. The technology was identical. The difference was whether people felt their input genuinely mattered.
The blockages hiding in plain sight
Traditional AI training focuses on capabilities: here's what the tool can do, here's how to prompt it, here's the interface. But the real barriers to adoption aren't technical; they're human, organisational and often invisible until you create space for them to surface.
I'm seeing this play out across different sectors. In universities, humanities academics resist AI not because they can't understand it, but because they have genuine ethical and environmental concerns that aren't being meaningfully addressed. When staff have spent decades perfecting their workflows, asking them to adopt AI feels like being told their expertise is suddenly obsolete. These aren't training issues; they're change management crises masquerading as skills gaps.
The most insidious blockages are organisational silos. As Michael Houlihan of Generation UK and Ireland noted at the Summit, organisations often approach AI through hundreds of isolated use cases, each pursued independently. Leanne Allen argued forcefully against this approach: the 'we have 100s of use cases, take them by priority' mindset invariably leads to fragmented implementation. What's needed is a capability approach: understanding workflows holistically and identifying where AI can genuinely add value across connected processes.
Why bottom-up beats top-down
There's a cognitive mismatch at the heart of most AI training. Senior leadership, advised by consultants who've never done the actual work, decide which AI tools to implement. They purchase licenses, mandate training sessions and wonder why adoption remains anaemic. Meanwhile, the people doing the work could tell you exactly where AI would help—if anyone asked them.
Kevin Roose and Casey Newton on their Hard Fork podcast recently highlighted research showing that the most successful AI implementations come from employees identifying their own use cases. This isn't because those employees understand AI better, but because they understand their own work better. They know where the friction points are, where time gets wasted, where human creativity is being squandered on mechanical tasks.
This is what Michael Houlihan at the Summit meant when he suggested each team should identify one workflow per month that could benefit from AI. Not imposed from above, but discovered from within. It's the difference between training that teaches people to use tools they don't want and training that helps them solve problems they actually have.
Lessons from unlikely places
Ten years ago, leading the Wellcome Trust-funded Tarrare project taught me something crucial about bringing diverse perspectives together. We had a pathologist and medical students approaching historical evidence literally (was there really a golden fork?) while artists approached it metaphorically (what does the golden fork represent?). These weren't communication failures; they were fundamentally different cognitive frameworks for understanding the same information.
The breakthrough came not from choosing one perspective over the other, but from creating structured encounters where each way of knowing could contribute to a richer understanding. Medical students suddenly saw medical history through new eyes, the pathologist discovered how metaphor could shift diagnostic assumptions, and the artists began to understand how bodies are viewed scientifically, which altered how the show took shape.
The parallel to AI training is direct. Technical teams think in terms of accuracy rates and processing speeds. End users think about whether this will make their job harder. Finance thinks about ROI. Customers think about whether they're now talking to a machine. Successful AI training doesn't try to make everyone think the same way; it creates frameworks where these different perspectives can productively collide.
The real work of inclusive AI training
Getting diverse voices in the room isn't about token representation or stakeholder box-ticking. It's about fundamentally restructuring how we approach AI education. Instead of information dissemination ('Here's how to write a prompt'), we need facilitated discovery ('What problems do you face that AI might address?').
This seems like more work initially, and organisations resist what appears to be a more expensive approach. But it's actually a profound economy. Training that surfaces real barriers and addresses genuine concerns doesn't need repeating every six months. When employees learn to identify AI opportunities in their own work rather than being told where to apply it, they keep doing so as the technology evolves. You're not teaching tools; you're building capability.
In practice, this means structuring training as collaborative workshops where small teams develop solutions for their specific contexts, and creating safe spaces for people to voice concerns about job security, ethical implications and practical barriers. It means gathering real-time feedback during training—not satisfaction surveys afterwards, but ongoing dialogue about what's actually preventing adoption.
The path forward
The high failure rate isn't inevitable. But avoiding it requires us to stop treating AI training as a technical challenge and start treating it as a human one. Every failed pilot I've encountered had the same DNA: technology chosen without understanding the problem, training delivered without addressing concerns, implementation attempted without involving the people who would actually use it.
Successful AI adoption happens when we flip the script. Start with the problems people actually have. Include voices from across the organisation, especially the sceptics and resisters. Address the ethical concerns, the job security fears and the practical barriers head-on. Create frameworks for ongoing adaptation rather than one-off training events.
Most importantly, recognise that the people closest to the work often understand better than anyone where AI could genuinely add value, and where it would only add complexity. Their insights, concerns and resistance aren't obstacles to AI adoption; they're the data we need to get it right.
The question isn't whether your organisation will adopt AI; market pressures will ensure that. The question is whether you'll be part of the minority that successfully scales beyond pilots, or the majority that discovers too late that they forgot to include the people who actually do the work.
***
Laura Gates, PhD, combines 20+ years of research leadership with hands-on software engineering experience. As an AI Training & Adoption Strategist, she partners with AI consultancies and works directly with organisations on meaningful and effective training design, stakeholder engagement and the cultural shifts that determine the effectiveness of AI initiatives. Find out more at lauragates.io