How the arts and humanities are crucial to responsible AI

[Image: colourful abstract interwoven line pattern]

Artificial intelligence (AI) technologies are advancing fast. How can the arts and humanities, and a new UKRI programme, help create a responsible AI ecosystem?

The recent, much-reported emergence of commercially marketable AI technologies like ChatGPT, AlphaFold, DALL-E and LaMDA heralds a vast new source of social power. Of course, today’s AI systems are not machine minds, but mathematical tools that appear intelligent only by extracting useful patterns from data created by intelligent humans.

But it turns out that we can do a lot with this kind of borrowed intelligence – we can ask a machine to crack the molecular code of protein structures, to generate clever new poems and essays, to create virtually any kind of image we can describe, to synthesise new sounds and videos on demand, even to write new computer code for other machines. It remains to be seen what else we might do.

But how do we – our societies – do any of this responsibly? What would that really mean? Who gets to define the new field of ‘responsible AI’ research and innovation? How do we weave it through an increasingly complex AI ecosystem? How do we unite the many different actors in that ecosystem in this effort, and hold one another accountable for it?

That’s the great challenge that UK Research and Innovation’s (UKRI) Arts and Humanities Research Council (AHRC) has funded us to meet with our new BRAID (Bridging Responsible AI Divides) programme. BRAID, which was initially announced during London Tech Week one year ago, will link researchers, developers, users, policymakers, regulators and impacted publics in a coordinated effort to enable a responsible AI ecosystem to flourish in the UK. The effort will also inject fresh insight and guidance from voices in the arts and humanities that are often excluded from conversations about how AI’s new power can be equitably shared, justly used and socially legitimised.

AI as social power

Why is that conversation necessary? AI is already being embraced, celebrated, contested, feared, hyped, shared, hoarded, misunderstood, studied, used and misused by people all over the world. But every vast new social power that we invent, from the written word to the steam engine, must be guided and shaped by humane values if we expect it to enable human flourishing. Values like justice, openness, honesty, creativity, care, wisdom and responsibility must continually find new expression in our techniques and artefacts. Every time we remake the world with our ingenuity, we also have to reinscribe in that world the values that will sustain us.

The use of a social power like AI – one that can so easily be used to uplift, unify and liberate or to impoverish, divide and oppress – also requires collective deliberation and shared governance in order to be morally and politically legitimised. Power that is opaque, unconstrained, unaccountable to those subject to it, and wielded by the few over the many is nothing new, yet democratic societies decided some centuries ago that powers of this kind cannot be justified.

A great divide

This is the immense challenge with AI that we now face here in the UK, and around the world. It’s not a technical or scientific challenge; it is a moral and political one. Yet it cannot be met without bringing moral and political wisdom deep into the heart of technology. This is very hard to do, because modern educational systems have enforced an artificial and harmful split between humane knowledge – our understanding of human values, history, culture and ideals – and human technical capability – how to design, make and sustain the built world.

[Image: areas of red and blue split by a jagged crack. Credit: filo, DigitalVision Vectors via Getty Images]

While the creative arts remain a vital but fragile link, continually reweaving the threads of technique into our explorations of humane visions and values, we largely divide our society – and our children’s sense of their future potential – between the ‘STEM’ disciplines (science, technology, engineering and mathematics) and skills of moral, political and creative judgement. That the latter are seen as ‘soft’ skills despite being equally fundamental to the construction of liveable societies only underscores the peril of this split.

As the philosopher and theologian Hans Jonas observed half a century ago, modern societies stand astride a perilous breach between innovation and wisdom; between power and responsibility. If we do not act quickly to bridge it, AI could make that widening breach an insurmountable divide.

Responsible AI to the rescue?

In recognition of that need, a conversation started decades ago about how to build and deploy artificially intelligent systems responsibly. While 20th-century science fiction authors like Isaac Asimov inspired a legion of speculative forays into the topic, computer scientists like Joseph Weizenbaum started a wider, scientifically grounded conversation about human responsibilities for AI design choices in the 1970s. In 1979, philosopher and computer ethics pioneer James Moor first posed the question, “Are there decisions computers should never make?”

But the past decade’s commercial boom in AI research turned work on the ethical questions posed by AI from a fascinating research niche to an urgent public priority, one that governments, civil society and even large tech companies had to act upon. Investments in AI ethics and policy skyrocketed, creating new research fields such as machine learning fairness, sustainable AI, and AI safety. In these new interdisciplinary fields, computer scientists, social scientists, philosophers, designers and engineers are beginning at last to learn from and with one another what responsible AI might mean.

The term ‘responsible AI’ has no single agreed-upon meaning; it names the many diverse demands for AI technology that is morally and politically legitimate as a source of social power. Yet the quest for ‘responsible AI’ has been a bumpy ride. From accusations of cynical ‘ethics-washing’ by Big Tech (exacerbated by recent high-profile corporate layoffs of AI ethics and safety teams), to the struggle to craft regulations for a technology that seems to acquire new capabilities and uses every month, to the difficulty of translating academic research in responsible AI into effective design and policy choices, there are high barriers to embedding this knowledge across our innovation ecosystem, and a great deal of work remains to be done.

Responsible AI requires coordinated, cooperative and sustained collaboration between academia, industry, government and civil society. It requires the voices and expertise of engineers and designers, creators and regulators, citizens and policymakers, corporations and entrepreneurs. Above all, it requires knowledge held by the communities and publics whose immediate, lived experience of AI’s impact is often neglected until it is too late.

Perhaps the greatest challenges for a healthy and responsible AI ecosystem, in the UK and elsewhere, are the lingering divides between these sectors, communities and knowledge domains. Clashing vocabularies, methodologies, incentives, norms and cultures of practice all too often block the successful embedding and adoption of responsible AI knowledge. How do we weave these into a single harmonious ecosystem aligned with the public interest?

BRAID

Last year, UKRI’s AHRC announced a new £8.5 million investment in enabling a responsible AI ecosystem. The first programme of its scale in the UK, it aims to see researchers collaborate with industry and policymakers to tackle some of the biggest ethical questions posed by AI, building public trust and ensuring the UK remains at the global forefront of the research, development and deployment of AI. The programme also seeks to leverage the unique strengths of the UK’s vibrant arts and humanities to bridge the growing divide between technology’s power and human responsibility.

[Image: a broken bridge. Credit: Paul-Briden, iStock, Getty Images Plus via Getty Images]

We are thrilled to launch the BRAID programme to meet this challenge. Delivered in partnership with the Ada Lovelace Institute, and with the help of researchers from the BBC, our interdisciplinary team at the University of Edinburgh will work with academics, policymakers and regulators, civil society, industry and publics across the UK nations to identify and begin to lower the barriers to an AI ecosystem that is responsible, ethical and accountable by default.

We seek to do this in several ways. First, our own team bridges multiple divides: it is co-led by a philosopher and a human-computer interaction and design researcher, both with long records of engagement on responsible AI with industry and the public sector. Together we will supervise an innovative landscape study of the responsible AI ecosystem, looking at its recent past, enduring challenges, and what is needed to grow and sustain it.

Our co-investigators include experts in machine learning, law, creative arts, social sciences and journalism. Together, they will lead thematic explorations of the potential of arts and humanities to enrich the capability of technical and scientific disciplines to realise more humane, inspired, equitable, and resilient forms of innovation. These explorations will seek ways to reweave the threads of creative insight and humane wisdom into the fabric of AI.

Working closely with policy experts at the Ada Lovelace Institute and BBC researchers, our programme also aims to create more durable infrastructure for the effective translation and co-construction of responsible AI knowledge, so that it can flow more easily across the boundaries between disciplines, sectors and communities. As part of this, we will host public events and online fora that invite new voices from the arts, humanities and civil society to co-shape, interrogate and enrich emerging visions of responsible AI.

Another pillar of our work focuses on the embedding and adoption of responsible AI in policy, regulation and industry, including small and medium-sized enterprises that have until now largely been neglected by the focus on well-resourced and powerful tech companies. As part of this, AHRC have announced a funding opportunity for BRAID scoping projects that lay the groundwork for demonstrating the embedding and testing of responsible AI tools and approaches in real-world settings and conditions.

These projects will help organisations better understand, manage, and responsibly govern AI risks, by enabling new and innovative approaches to responsible AI to be piloted and evaluated in context. A second funding opportunity later this year will announce BRAID fellowships that enable responsible AI researchers to be hosted across academic and non-academic organisations. The ambition of these funding opportunities is to demonstrate the power of embedding responsible, human-centred approaches and thinking across the AI ecosystem.

The final pillar of our work focuses on a key obstacle to earning public trust in AI: the lack of accountability and answerability to those most impacted by AI’s transformations of society. Working directly with policy and regulatory actors in the UK’s AI ecosystem, and building on research from the Ada Lovelace Institute exploring public attitudes on AI, we will conduct research ‘deep dives’ on the lingering barriers to making the new social power of AI fully accountable and legitimate in the eyes of impacted publics and communities.

Weaving our shared future with AI

We are tremendously excited about the road ahead for the BRAID programme, and our new website will soon highlight more opportunities for you to learn about or contribute to our programme, no matter what part of the AI ecosystem you occupy. The expansive scope of AI’s new social power means that we now all have a rightful place in this ecosystem, and that place must include a voice and a say. To find out more about these future opportunities, please visit the BRAID website and register your interest by joining our mailing list.

[Image: someone using an old-fashioned loom. Credit: itotoyu, iStock, Getty Images Plus via Getty Images]

To make humane futures with AI a reality will demand hard work and collaboration on responsible AI across a vast network of stakeholders and communities that need sturdy bridges built between them. This requires a kind of weaving or braiding: of different values and visions, of the divided threads of conversations and communities.

It’s appropriate, then, that the BRAID programme is dedicated to this weaving. As revealed in a recent blog post by Stefanie Hessler published by our partners at the Ada Lovelace Institute, weaving was the creative practice that sparked 19th-century computing pioneer Ada Lovelace’s recognition of the link between the mechanisms of a Jacquard loom and the mechanisms of writing binary code.

As Lovelace observed in her notes on Babbage’s famed Analytical Engine: “[it] weaves algebraic patterns, just as the Jacquard loom weaves flowers and leaves.” Her ability to braid her understanding of an aesthetic craft into a new, seemingly disconnected thread – her analysis of the first mechanical computer – was a first step toward the software revolution that is bearing such remarkable fruits today. There are many more creative weavings and braidings to be done to create a living ecosystem of responsible AI knowledge and practice, and we can’t wait to begin the work.

Follow us on Twitter at @Braid_UK and join our mailing list by visiting the BRAID website.

The BRAID Scoping to Embed Responsible AI in Context funding opportunity opens on 29 June.

Top image credit: papparaffie, iStock, Getty Images Plus via Getty Images
