Today’s artificial intelligence (AI) is deeply embedded in our lives, increasingly producing and acting upon cultural outputs, including language, images, and narratives. Yet these systems often fail when nuance, context, and interpretive complexity matter most.
These failures show up:
- in healthcare, where patients’ rich experiences are flattened into data points, so that key symptoms or treatment options are missed
- in climate action, where local nuance and knowledge are lost in broad generalisations, producing generic solutions that don’t work in local contexts
- or in democratic debate, where AI amplifies certain voices over others
The consequences are real.
Overcoming limitations
This blog introduces a new international research call, inviting interdisciplinary researchers working in and between the humanities and technical disciplines to overcome the limitations of existing AI technologies in settings where nuance and context are vital.
We hope to develop AI systems designed to respond to how meaning varies across communities and contexts, embracing multiple perspectives rather than imposing single interpretations.
A research sandpit co-funded by the Arts and Humanities Research Council and the Social Sciences and Humanities Research Council of Canada will bring communities together through a distinctive collaborative process.
Facilitated dialogue
Central to this are intensive workshops where teams and themes develop through facilitated dialogue.
These workshops are generative experiences, offering opportunities to work in new ways across disciplinary boundaries with expert mentorship and peer support.
Through the workshop process, new project teams and research propositions will be co-developed and funded.
A threshold moment
We are at a critical juncture in AI development. Current systems are deployed across sensitive domains yet operate with narrow definitions of intelligence that risk marginalising complex understandings of human experience and cultural meaning.
What if AI development could be informed by deep understanding of how people make meaning across cultural and social contexts?
This is the vision behind interpretive technologies, research that integrates perspectives from the humanities and arts directly into how AI systems are designed, trained, and evaluated.
It focuses on how AI engages with meaning, for instance, how meanings shift across cultures. This distinguishes it from interpretable AI, which seeks transparency through simpler, more understandable models.
This is not about making AI more ‘human’, but about developing systems that can engage appropriately with diverse contexts, while remaining computationally effective.
It recognises the vital contribution humanities and arts can make to a technology that is ever more integrated into daily life.
An opportunity across boundaries
This funding represents a significant opportunity for researchers across multiple domains.
For humanities and arts scholars, this is a chance to shape AI’s foundations.
For computer science, engineering, and AI researchers, this offers pathways beyond current technical plateaus through diverse reasoning paradigms.
For those already working across disciplines, this creates opportunities to pioneer new collaborative territory that transcends existing interdisciplinary boundaries, including researchers in:
- digital humanities
- responsible AI
- human-computer interaction
- AI arts
- science and technology studies
What we are seeking
We invite expressions of interest from individual researchers.
You should not have a pre-formed team or fully developed project idea; the sandpit aims to foster collaboration that pushes beyond current approaches.
We welcome applications from both early-career and experienced researchers.
For more established applicants, we actively encourage proposals that align with, extend, or leverage the resources of existing research programmes or labs.
Projects emerging from the sandpit will:
- develop functioning prototypes that show measurable improvements in AI systems’ interpretive capabilities
- create new evaluation frameworks and benchmarks that assess cultural sensitivity, contextual awareness, and perspectival reasoning (a brief illustrative sketch follows this list)
- integrate humanities and arts methodologies into technical development pipelines, showing direct influence on system design
- contribute to an open ecosystem of tools, methods, and resources for interpretive AI
Ethical foundations, not afterthoughts
This research positions ethical considerations at its core.
We recognise significant risks: more sophisticated interpretive AI could enable cultural appropriation at scale, targeted manipulation, or displacement of human interpretive roles such as curators and cultural analysts.
The goal is not simply more powerful AI, but more responsible development that embeds safeguards, community consent, and human agency into system architecture from the outset.
This initiative sits alongside important work on responsible AI, including the Bridging Responsible AI Divides and Responsible AI UK programmes, and explores how ethical considerations can directly inform technical design choices and evaluation frameworks.
Central to this is developing AI with and for communities rather than imposing technical solutions upon them.
Real-world applications
Research in this direction could transform the following critical domains.
Climate action
AI that bridges global scientific models with local cultural knowledge and implementation realities, ensuring interventions draw on local know-how, fit local contexts, and so maximise effectiveness and adoption.
Healthcare
Systems that preserve narrative complexity rather than reducing experiences to standardised data, enabling healthcare professionals to provide tailored interventions and improve patient outcomes.
Democratic engagement
AI that works with diverse perspectives rather than amplifying dominant voices, providing a more representative picture of opinions and viewpoints.
Critical timing
The UK government positions AI technology at the heart of the UK’s future prosperity and sets out a vital role for AI in health diagnostics and in the prediction and mitigation of the effects of climate change.
Meeting these ambitions depends on how AI systems are built to handle meaning and context.
This is a critical moment to shape AI’s foundations: early design choices will steer its trajectory for years to come.
Ready to shape AI’s future?
This sandpit is part of the Doing AI Differently initiative, which has engaged more than 150 researchers across six continents in exploring how interpretive perspectives can inform AI development.
Whether you’re exploring technical development from humanities perspectives, seeking new approaches to persistent AI challenges, or ready to pioneer entirely new collaborative territory, the sandpit offers a unique opportunity to influence AI’s development at a pivotal moment.
The sandpit pre-announcement and white paper launched in July 2025. Expressions of interest are open now.