Why AI training fails when meaningful voices aren’t in the room

At the 2025 AI Summit London's medicine panel in June, Patty O'Callaghan of Charles River Laboratories discussed the trust deficit among doctors working with AI outputs. Many medical professionals are resistant to working with LLMs not because the technology isn't sophisticated or can't add value, but often because they've read about hallucinations in the news and nobody has taken the time to explain how these tools actually work, when they're reliable and when they're not. These aren't technology failures, but failures of imagination about who needs to be in the room when we design AI training.

Mark Barber from AstraZeneca offered a solution: they brought radiologists directly into their development team as integral members shaping how AI would work in their actual context. The trust followed naturally because the radiologists understood not just what the AI did, but why it made the choices it made.

This mirrors something I discovered running a recent research festival at my university. When we used a pilot of a digital feedback platform, Obwob, to gather live questions during my talk, engagement soared as we made diverse perspectives visible in real time. The same platform, deployed passively next to research posters with QR codes, gathered exactly zero responses. The technology was identical; the difference was whether people felt their input genuinely mattered.

The blockages hiding in plain sight

Traditional AI training focuses on capabilities: here's what the tool can do, here's how to prompt it, here's the interface. But the real barriers to adoption aren't technical; they're human, organisational and often invisible until you create space for them to surface.

I'm seeing this play out across different sectors. In universities, humanities academics resist AI not because they can't understand it, but because they have genuine ethical and environmental concerns that aren’t being meaningfully addressed. When staff have spent decades perfecting their workflows, asking them to adopt AI feels like being told their expertise is suddenly obsolete.

One of the most insidious blockages is organisational silos. As Michael Houlihan of Generation UK and Ireland noted at the Summit, organisations often approach AI through hundreds of isolated use cases, each pursued independently. Leanne Allen argued against this approach: the mindset of 'we have hundreds of use cases, take them by priority' invariably leads to fragmented implementation. What's needed is a capability approach: understanding workflows holistically and identifying where AI can genuinely add value across connected processes.

Bottom-up over top-down

There's a cognitive mismatch at the heart of most AI training: senior leadership, advised by consultants who've never done the actual work, decide which AI tools to implement. They purchase licenses, mandate training sessions and wonder why adoption remains anaemic. Meanwhile, the people on the ground doing the work could identify exactly where AI would help, if anyone asked them.

Kevin Roose and Casey Newton on their Hard Fork podcast recently highlighted research showing that the most successful AI implementations come from employees identifying their own use cases. This wasn't because they understood AI better, but because they understood their own work better. They know where the friction points are, where time gets wasted, where human creativity is being squandered on mechanical tasks.

This is what Michael Houlihan at the Summit meant when he suggested each team should identify one workflow per month that could benefit from AI. Not imposed from above, but discovered from within. It's the difference between training that teaches people to use tools they don't want and training that helps them solve problems they actually have.

Lessons from interdisciplinary research

Ten years ago, leading the Wellcome Trust-funded Tarrare project taught me something crucial about bringing diverse perspectives together. We had a pathologist and medical students approaching historical evidence literally (was there really a golden fork?) while artists approached it metaphorically (what does the golden fork represent?). These weren't communication failures, they were fundamentally different cognitive frameworks for understanding the same information.

The breakthrough came not from choosing one perspective over the other, but from creating structured encounters where each way of knowing could contribute to a richer understanding. Medical students suddenly saw medical history through new eyes, the pathologist discovered how metaphor could shift diagnostic assumptions, and the artists began to understand how bodies are viewed scientifically, which altered how the show took shape.

The parallel to AI training is direct. Technical teams think in terms of accuracy rates and processing speeds. End users think about whether this will make their job harder. Finance thinks about ROI. Customers think about whether they're now talking to a machine. Successful AI training doesn't try to make everyone think the same way, it creates frameworks where these different perspectives can productively collide.

The real work of inclusive AI training

Getting diverse voices in the room isn't about token representation or stakeholder box-ticking. It's about fundamentally restructuring how we approach AI education. Instead of information dissemination ('Here's how to write a prompt'), we need facilitated discovery ('What problems do you face that AI might address?').

This seems like more work initially, but it's actually a profound economy. Training that surfaces real barriers and addresses genuine concerns doesn't need repeating every six months. When employees learn to identify AI opportunities in their own work rather than being told where to apply it, they keep doing so as the technology evolves. You're not teaching tools, you're building capability.

In practice, this means structuring training as collaborative workshops where small teams develop solutions for their specific contexts, and creating safe spaces for people to voice concerns about job security, ethical implications and practical barriers. It means gathering real-time feedback during training – not satisfaction surveys afterwards, but ongoing dialogue about what's actually preventing adoption.

The path forward

Successful AI adoption happens when we start with the problems people actually have. Include voices from across the organisation, especially the sceptics and resisters. Address the ethical concerns, the job security fears and the practical barriers head-on. Create frameworks for ongoing adaptation rather than one-off training events. Most importantly, recognise that the people closest to the work often understand better than anyone where AI could genuinely add value, and where it would only add complexity. Their insights, concerns and resistance aren't obstacles to AI adoption, they're the data needed to get it right.

***

Laura Gates, PhD, is a research-led consultant and academic specialising in creative methods for impact, presence-led engagement and human–system interaction, working with organisations to embed reflection and evidence-gathering in live contexts. Read more at lauragates.io
