Two days at AI Summit London
AI Summit London revealed an industry in transition, moving from experimental enthusiasm to the messier realities of implementation, ethics and what it means to integrate new technologies without re-entrenching existing systems of power. Across the stages and sessions, the conversations kept circling back to the same fundamental challenges: how do we move beyond pilot projects to meaningful organisational change, and what does genuine human engagement with AI actually look like?
The strategy gap becomes visible
What struck me most was the growing recognition of a training and implementation gap. Multiple speakers acknowledged that while there's no shortage of high-level strategy, the practical work of helping organisations navigate the human and cultural barriers to AI adoption remains underdeveloped.
What I kept sensing was how little attention went to the realities of actual workers and their existing, habit-entrenched, already-at-capacity workflows. One speaker on the medicine panel noted that doctors don't trust the results they're getting from AI models, while another highlighted how over 80% of AI projects fail because the right data isn't available at the right time. This is partly a technical problem, but it's fundamentally about how humans and organisations interact with new technologies in practice.
Human-AI collaboration, not replacement
Across every stage, speakers emphasised that AI should enhance rather than replace human capabilities, with a consistent focus on keeping humans in the loop, particularly for high-risk decisions.
The now-familiar line 'AI will not take your job; someone who understands AI better than you will' came up repeatedly. It captures how the challenge facing organisations isn't wholesale job replacement, but the harder work of developing what one speaker called 'bilingual' skills that combine domain expertise with AI fluency. The discussions around agentic AI were revealing here. While these systems can operate more independently than earlier AI, speakers consistently stressed the need for new governance frameworks and meaningful human oversight. Autonomy still requires thoughtful human architecture.
The data foundation problem
Perhaps the most sobering theme was the persistent challenge of data quality and governance. Multiple panels identified poor data management as the primary reason AI projects fail to scale beyond the pilot stage, a framing that shifts attention from clean datasets to deeper questions of data strategy and organisational maturity. One speaker highlighted the difficulty of integrating systems across countries, and data silos emerged as a recurring practical barrier. What was encouraging was that speakers increasingly treated this as an organisational design challenge rather than a purely technical one.
The voice problem
One theme that resonated particularly with my own research was the repeated emphasis on getting diverse voices into the room during AI development. Multiple speakers stressed the importance of including perspectives from disenfranchised communities in AI system design. If we don't, as one speaker put it, the 'intelligence' in AI risks being limited to overwhelmingly white, male, Western frameworks of what intelligence means.
But there's a significant gap between recognising this need and having workable methods for achieving it. As one speaker noted, we need 'an environment where actual users can be part of the design of these systems', and the how of that remains underdeveloped. This connects directly to my work on impact measurement and feedback collection in live contexts – the challenge isn't just getting people in the room, it's creating structured ways for different stakeholders to contribute meaningfully to shared problems.
Moving beyond technological solutions
What became clear across two days is that AI's most pressing challenges are human, cultural and structural rather than purely technical. The technology continues to advance rapidly, but whether it actually changes anything depends on addressing the human factors: how people build trust in new tools, how organisations shift their habits, how workers develop new fluencies without being overwhelmed, how all of this can be approached ethically and thoughtfully, without re-entrenching existing systems of power and exploitation.
The most interesting work, it increasingly seems, happens not in the models themselves but at the intersection of technical capability and human systems, in the unglamorous, slow work of helping people and institutions actually change in meaningful and ethical ways.
***
Laura Gates, PhD, is a research-led consultant and academic specialising in creative methods for impact, presence-led engagement and human–system interaction, working with organisations to embed reflection and evidence-gathering in live contexts. Read more at lauragates.io