The question before the tools: What 20 years of research leadership reveals about AI strategy
In 2015, I found myself in a room with a pathologist, a disability historian, medical students and theatre artists, all staring at the same problem from completely different angles. We were exploring the case of Tarrare, an 18th-century French medical anomaly (he couldn’t stop eating) who died convinced that a golden fork lodged in his stomach was killing him. When his doctor performed the autopsy, no golden fork was found.
The Depraved Appetite of Tarrare the Freak: exploring medical authority through interdisciplinary collaboration between artists, medical professionals and historians.
The artists in the room approached this metaphorically: what did the golden fork represent about medical authority, patient experience, the limits of understanding? The pathologist approached it literally: was there actually something that his doctor, Baron Percy, missed in 1798, given the limitations of 18th-century autopsy techniques?
What emerged from that collision of perspectives changed how the pathologist thought about his own discipline. He began to see 'searching for the golden fork' as a metaphor for when medicine and science ask the wrong questions entirely—looking for definitive physical evidence when the real issue might lie elsewhere.
This moment encapsulates something I've learnt from 20 years of leading complex research projects: the most critical work happens before you choose your tools or methods. It's about ensuring you're asking the right questions in the first place.
The problem with starting with solutions
I'm seeing organisations make the same mistake with AI that I've watched research teams make for decades—jumping to solutions before properly defining the problem. In higher education, for instance, I observe two equally problematic approaches: complete avoidance of AI tools despite approximately 88% of students already using them, or immediate adoption of AI essay-marking systems without questioning what assessment is actually meant to achieve.
Both approaches skip the fundamental question: What are we trying to accomplish, and is AI the right way to get there?
The pathologist in that room initially wanted a literal answer about Tarrare's golden fork. The artists wanted to explore symbolic meaning. Neither approach was wrong, but productive collaboration only began when we stepped back to ask: what does this case study reveal about the relationship between medical authority and patient experience? That reframing allowed each discipline to contribute its unique perspective towards a shared understanding.
Why problem definition is collaborative work
During the Tarrare project, funded by the Wellcome Trust and Arts Council England, I coordinated across institutions in multiple cities, working with medical students at the University of Bristol, pathologists at University College London, disability historians at Swansea University and arts practitioners. The biggest challenge wasn't logistics or budget—it was getting people from radically different professional cultures to articulate research questions that made sense across all disciplines.
Scientists and medical professionals often seek definitive, measurable answers. Humanities scholars and artists explore ambiguity, context and meaning. Neither approach is superior, but productive collaboration requires frameworks that allow both perspectives to inform the same problem.
The parallels to AI implementation are striking. Technical teams focus on functionality and efficiency metrics. Business stakeholders think about process improvement and cost reduction. End users care about workflow integration and job impact. Success requires bringing these perspectives together around shared questions, not starting with predetermined solutions.
What works: Structured creative practice
I developed methodologies that moved beyond standard meetings to include staged discussions where each kind of expertise could contribute on its own terms while generating collaborative insights. For the Tarrare project, this meant creating spaces where medical students could respond to performance snippets, where pathologists could react to artistic interpretations, and where historians could provide context for contemporary artistic choices.
The breakthrough came not from compromise but from structured encounters between different ways of knowing. One medical student told me they suddenly understood medical history in a completely new way. The pathologist wrote about how the golden fork metaphor shifted his thinking about diagnostic assumptions.
This approach translates directly to AI strategy. Rather than having IT departments choose tools and then train users, or having business units demand AI solutions without understanding technical constraints, organisations need structured ways for different stakeholders to examine the same challenges from their unique perspectives.
The right questions for AI integration
Before selecting any AI tool, organisations should create collaborative frameworks to explore:
What problem are we actually trying to solve? Not 'How can we use AI?' but 'What specific challenge does AI help us address better than current approaches?'
Who are the stakeholders affected by this problem? Include voices from technical teams, end users, customers and anyone whose work will change.
What does success look like from each stakeholder perspective? Technical success, business success and user success may be entirely different things.
What are we willing to change about our current processes? AI integration often requires workflow redesign, not just tool substitution.
What are the unintended consequences we haven't considered? Like students switching universities when they discover AI marking their essays—a predictable response that could have been identified through proper stakeholder consultation.
Beyond the golden fork
The golden fork was never found because Tarrare died of tuberculosis, not of a mysterious object lodged inside him. But that literal fact misses the point. The search for the golden fork became a lens for understanding medical relationships, patient experience and the limits of diagnostic authority.
Similarly, the question isn't whether AI tools can solve your problems—it's whether you understand your problems well enough to evaluate any solution, AI or otherwise.
After two decades of managing complex projects across disciplines, institutions and professional cultures, I've learnt that the most sophisticated technology fails when applied to poorly understood problems. The most successful projects begin with collaborative problem definition that allows different perspectives to inform shared questions.
That's the work that happens before you choose the tools, and it's where the real strategy lives.
***
Laura Gates, PhD, combines 20+ years of research leadership with hands-on software engineering experience. Her AI Integration Consultancy helps organisations develop thoughtful approaches to AI adoption that enhance rather than replace human capabilities.