This funding opportunity aims to support projects that address strategic themes that underpin responsible AI. It aims to grow the network of researchers engaged in these topics and complement other key aligned programmes funded by UKRI (for example, ATI, Ada, AI Hubs, Centres for Doctoral Training) and AI institutes funded by third parties.
Your proposed keystone project will form one of the pillars of Responsible AI UK and will deliver flexible funding to bring together the best talent to address the multi- and interdisciplinary challenges posed by responsible AI.
We particularly encourage proposals that fit the Responsible AI UK vision:
- help enable responsible AI to power benefits for everyday life
- work in collaboration with researchers, industry professionals, policymakers and stakeholders to drive an ecosystem that will be responsive to the needs of society
Before publishing the full funding opportunity, Responsible AI UK will refine the topics to be covered by the keystone projects based on the series of roundtables that have been run at the townhall events in London, Cardiff, Belfast and Glasgow.
Further consultation with the AI ecosystem will take place through a series of roundtables, and feedback from members of the Responsible AI UK strategy group and other key members of the UK AI community.
To date, the following overarching themes have emerged from these consultations, and we will look to refine these over the coming weeks, including targeting specific areas where UK research can make the most impact. We expect all projects to consider both the technology development that embeds trust and responsibility and how this can deliver social benefits through technology implementation, regulation and education.
Responsible AI-powered organisations
Organisations are rapidly evolving new ways of working in response to AI. This has wide-ranging implications for organisations and their employees, and the end-users of the products and services offered by these organisations.
Responsible AI practices need to include systems-based approaches, going beyond testing of individual AI algorithms, to evaluating the wider implications of AI deployment into such complex human and AI collaborative systems.
This includes ensuring that innovators understand the overall consequences, and the measures, such as reskilling and upskilling, that will need to be developed for the uptake of AI to be successful. What are the responsible AI principles and sector-specific approaches to AI innovations that are safe, trustworthy, and provide confidence to organisations, customers, stakeholders, and wider society?
Addressing harms, maximising benefits of AI
How AI is framed and applied introduces new opportunities but also trade-offs for individuals, society, and industry, where the purported benefit may be outweighed by negative impacts on a wide range of issues, such as:
- labour rights
- social justice
- sustainability (of people, organisations, environment)
Many of these have direct legal ramifications irrespective of future specific AI regulation.
Moreover, AI solutions often do not generalise to previously unseen settings, exacerbating uncertainties as to whether existing successes translate to new domains, sectors, cultural and global contexts. This introduces the need for deployment, validation, provenance, and auditing regimes for AI, so decision makers can thoroughly understand and manage the limitations of AI systems to ensure they are safe, ethical, and beneficial, while simultaneously highlighting where further development is needed to extend their scope.
AI law, governance, and regulation in an international context
The UK will need to meet the challenges of when and how to govern and regulate AI within the international digital economy. In the National AI Strategy, the UK government emphasises the desire to encourage startups and small and medium-sized enterprises to adopt AI while acknowledging the increasing need to comply with AI regulation. This is set against the currently differing approaches taken by the EU, China, the US, and the Global South, a global debate to which the UK must make a strong contribution. Against this background, we seek the most effective research contributions the UK can make to this global debate, driving for clarity around AI regulation that promotes trust, fairness, and accountability for users, and certainty for international commerce.
Projects will need to fit within at least one of these themes.
The consortium will draw on a diverse range of voices, engaging different academic institutions, policymakers or advisory groups, and commercial enterprises. The team will reflect the strengths represented across the UK. It is expected this group will have:
- thematic expertise reflecting the communities that will be engaged in the investment
- in-depth understanding of relevant disciplines, technologies, policy challenges and evidence needs
- an ability to engage with diverse stakeholders, including non-academics
- an ability to articulate a clear vision for engagement with communities working in this area and new partners from within the appropriate disciplines or elsewhere
- an ability to deliver complex projects on time and on budget, considering the variety of activities and outputs
- specialist expertise (academic or non-academic) to support the desired outcomes, which may include:
  - sector representatives
  - programme management expertise
  - a communications function
  - knowledge mobilisation expertise
Key features of a keystone project
Quality and ambition
A keystone project is seen as a scheme that matches the best with the best and allows researchers to tackle bigger, more open-ended problems through a more coherent, holistic approach.
The stability in tackling a longer-range vision helps motivate teams, provides the freedom required to take risks, and enables longer term planning.
Partnership and ecosystem development
The scale of activity is seen to create stronger links between the universities involved and greater visibility at a national and international level, leveraging the Responsible AI UK network, partners, and international connections.
The size of keystone projects allows for the assembly of the best team and collaborators, all with complementary expertise leading to the development of effective multidisciplinary and cross-disciplinary working. You will be expected to promote and champion responsible AI in your respective domains, to help grow the community across sectors and disciplines.
The duration of keystone projects allows you to invest in building effective collaborations. The scale of a keystone project should look to attract partnerships beyond the original project partners and seek greater input from the wider community, including public and private sector, resulting in more external visibility on the research direction for the area.
Keystone projects will form part of the Responsible AI UK core pillars and contribute activities that will help connect and drive efficiencies within the UK AI ecosystem. You will have access to the broad network of partners brought together by Responsible AI UK and support international conversations on responsible AI using the reach that Responsible AI UK affords. You will also bring in other parts of the AI ecosystem that are currently disconnected from national conversations and research programmes. These activities should be specifically costed into the programme to allow for this flow of knowledge across organisational boundaries.
Please make it clear where the project looks to build on connections to existing networks and research programmes. Responsible AI UK is keen to understand how this funding is building new connections and developing existing relationships within the ecosystem, providing additional value within the landscape.
The keystone mechanism provides freedom to scope new opportunities, cross-fertilise ideas and build up new skill sets. This allows you to develop new themes and to trade ideas and resources. The stability of the grant allows early career researchers (ECRs) to express their creativity and to lead on parts of the investigation.
Impact and advocacy
Keystone projects are seen to have greater visibility and recognition within the universities involved and the relevant research communities at both a national and international level, leveraging the Responsible AI UK brand. This gives you more influence than smaller scale research activities.
You can attract more visits and engagement with high quality researchers and external stakeholders, leverage other funding, and influence wider strategies. The visibility also enhances the opportunities for outreach and advocacy, promoting UK science.
We expect you to demonstrate in your application how you will deliver or support the desired outcomes, supporting the UK’s transition to an AI-enabled economy and resulting in growth, prosperity and mutual benefit for sectors and citizens.
Keystone projects will be a good environment for ECRs’ longer term career development. The flexibility and longer durations allow the project lead to empower junior team members, giving them greater independence through more responsibility and leadership over activities.
Postdoctoral staff gain broader experience from the breadth of expertise in the team, and there are greater opportunities for secondments, mentoring and involvement in management. This makes keystone projects an attractive employment prospect, leading to higher quality recruitment. PhD students would also be expected to be aligned to keystone project teams, benefiting from interacting with a team of broader expertise and activity.
The flexibility of keystone projects is seen as a real strength of the scheme. It enables a more dynamic allocation of resources and a nimble approach to recruitment and to defining the individual projects being undertaken. Specifically, integrative activities across the Responsible AI UK ecosystem are expected, and 20% of funding and researcher time should be reserved for such dynamically defined activities, to be undertaken in collaboration with:
- Responsible AI UK
- UKRI AI investments
- keystone projects
- soon to be announced AI hubs and Centres for Doctoral Training
In addition to collaborating with the wider programme, it is anticipated that successful applicant teams will also engage with stakeholders and users of the research, who are essential to the design, conduct and impact of application-orientated research.
It is also expected that keystone projects demonstrate strong institutional support. Letters of support from participating organisations (at least from the principal investigator’s institution) should demonstrate alignment to the organisations’ strategy and ambitions and indicate contributions to the project.
While substantial contributions (cash or in-kind) are expected from project partners and participating organisations, a minimum requirement is not expected, and partnerships will instead be assessed on their relevance and alignment to the programme.
Note that all proposals submitted will be assessed equally, irrespective of which themes the proposal aligns with. However, a balanced range of projects across disciplines, sectors and themes will be funded.
If you are planning to bring international collaborators (for example, industry or academic partners), you will need to complete the Responsible AI UK’s Trusted Research section and the checklist for academia by the National Protective Security Authority and the National Cyber Security Centre (PDF, 115KB).
Note that based on the answers to this checklist, you may need to escalate this within your institution or department for a decision.
Management and monitoring
Keystone projects should have effective management and monitoring arrangements for the investment. This should include a risk management strategy and a strategy for how the flexibility of resources will be managed.
Responsible AI UK expects all keystone projects to establish and run an independent advisory board that will include at least one member of the Responsible AI UK leadership team, to provide advice and recommendations on the strategic scientific and research direction and activities (such as impact, advocacy and outreach) of the programme grant.
This independent advisory board must meet at least annually. This group should have at least 50% independent membership and an independent chair.
Responsible AI UK strongly encourages you to consider costing in project management and other administrative support such as employing a full-time equivalent project manager, and not relying on the project lead for these duties. Projects will be able to rely on the centralised communications, networking, and event management resources offered by the Responsible AI UK operations team for activities that look to bring together the wider ecosystem.
What this scheme is not for
We will not fund proposals that:
- do not embed responsible research and innovation and equality, diversity and inclusion considerations into the research theme itself as well as into the research practices
- are too focused on applied research and do not demonstrate significant ambitions or risk-taking
- do not build on the broader set of activities going on across the UK and international AI ecosystem
- do not have a mix of technical and non-technical disciplines engaged in joint activities
- do not demonstrate clear impact pathways
- do not demonstrate an interest in engaging with the ecosystem beyond the investigators’ own disciplines
Learn about the Responsible AI UK programme.
Up to £10 million funding is available through the Responsible AI UK programme to support up to four grants for up to 48 months (at 80% full economic cost (FEC)).
Standard UKRI eligibility rules apply. For details on who is eligible to receive funding, refer to the ‘who can apply’ section.
We expect funding requests of between £2 million and £3.5 million (at 80% FEC).
Please note that due to the nature of this funding, additional requirements on spending profile, reporting, monitoring and evaluation as well as grant extensions will apply. This will be reflected in the grant additional conditions, and those funded will need to comply with them. Further details are provided in the additional information section.
Please note that any projects funded through this funding opportunity will have a fixed start and end date, and that no slippage to this date will be permitted.
Grants will be funded at 80% of the stated FEC (except for non-academic partners). The remaining 20% must be contributed by the academic or industry partners submitting the proposal.
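As an illustrative sketch of the 80% FEC split (the figures below are hypothetical, not taken from this funding opportunity):

```python
def fec_split(full_economic_cost: float, funded_rate: float = 0.80):
    """Split a grant's full economic cost (FEC) into the funded portion
    (at 80% FEC) and the remaining contribution the submitting partners
    must cover themselves."""
    funded = full_economic_cost * funded_rate
    contributed = full_economic_cost - funded
    return funded, contributed

# Hypothetical example: a project with a £3,000,000 full economic cost
funded, contributed = fec_split(3_000_000)
print(f"Funded at 80% FEC: £{funded:,.0f}")            # £2,400,000
print(f"Partner contribution (20%): £{contributed:,.0f}")  # £600,000
```

This is only a worked illustration of the percentage split described above; the actual funded and contributed amounts are determined by UKRI costing rules.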
The grant can support any directly incurred costs, such as research staff time, consumables, travel and subsistence, and directly allocated costs, such as investigator time, and associated overheads.
You are not required to have existing collaborations or contacts within the Responsible AI UK programme.
Current Responsible AI UK investigators may not lead a project nor be costed on the grant, but they may be named either as co-investigator or project advisor depending on their contribution.
Non-capital equipment over £10,000 in value (including VAT) will only be funded in exceptional, well-justified circumstances. Items of equipment should be included under the ‘directly incurred – other costs’ heading and will need robust justification. Items over £10,000 will be especially scrutinised, and will only be permitted if their relevance to this funding opportunity is clearly justified.
Note that any deviation from the spending profile beyond 5% on an annual basis is not allowed (any underspend will not be refunded, nor any overspend covered). No-cost extensions will not be allowed.
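A minimal sketch of the 5% annual deviation rule described above (the profile figures are hypothetical):

```python
def within_profile(planned: float, actual: float, tolerance: float = 0.05) -> bool:
    """Check whether actual annual spend stays within the allowed
    tolerance (5% either way) of the planned spending profile for that year."""
    return abs(actual - planned) <= tolerance * planned

# Hypothetical annual profile of £500,000 planned spend
print(within_profile(500_000, 515_000))  # 3% over  -> True
print(within_profile(500_000, 530_000))  # 6% over  -> False
```

This is an illustration of the tolerance check only; how deviations are measured and reported in practice is governed by the grant additional conditions.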