ARIA: Safeguarded AI - TA1.4 Sociotechnical Integration
ARIA seeks teams from the economic, social, legal and political sciences to consider the sound sociotechnical integration of Safeguarded AI systems.
Opportunity Details
Registration opens: 15/10/2024
Registration closes: 02/01/2025
Award: Phase 1 will be supported by a total of £3.4m across 2-6 teams.
Organisation: ARIA
ARIA’s goal for the Safeguarded AI programme is to usher in a new era for AI safety, allowing us to unlock the full economic and social benefits of advanced AI systems while minimising risks.
Sociotechnical Integration
The third solicitation for ARIA’s Safeguarded AI programme is focused on TA1.4 Sociotechnical Integration. Backed by £3.4m, we’re looking to support teams from the economic, social, legal and political sciences to consider the sound sociotechnical integration of Safeguarded AI systems.
This solicitation seeks R&D Creators – individuals and teams that ARIA will fund – to work on problems that are plausibly critical to ensuring that the technologies developed as part of the programme will be used in the best interest of humanity at large, and that they are designed in a way that enables their governability through representative processes of collective deliberation and decision-making.
A few examples of the open problems we’re looking for people to work on:
- Qualitative deliberation facilitation: What tools or processes best enable representative input, collective deliberation and decision-making about safety specifications, acceptable risk thresholds, or success conditions for a given application domain? We hope to integrate these into the Safeguarded AI scaffolding.
- Quantitative bargaining solutions: What social choice mechanisms or quantitative bargaining solutions could best navigate irreconcilable differences in stakeholders’ goals, risk tolerances, and preferences, so that Safeguarded AI systems serve a multi-stakeholder notion of the public good? (A minimal illustrative sketch of one such mechanism follows this list.)
- Governability tools for society: How can we ensure that Safeguarded AI systems are governed in societally beneficial and legitimate ways?
- Governability tools for organisations: Organisations developing Safeguarded AI capabilities have the potential to create significant externalities – both risks and benefits. Which decision-making and governance mechanisms best ensure that entities developing or deploying Safeguarded AI capabilities treat these externalities as appropriately major factors in their decision-making, and continue to do so?
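To make the quantitative bargaining item above concrete, here is a minimal sketch of one classical mechanism, the Nash bargaining solution, applied to a hypothetical negotiation over a shared risk threshold. It is illustrative only: the stakeholders, utility functions, and disagreement payoffs are invented for this example, and the funding call does not prescribe any particular mechanism.

```python
# Illustrative only: a classical Nash bargaining solution over a single
# shared parameter (e.g. an acceptable-risk threshold). The stakeholders,
# utilities, and disagreement payoffs below are hypothetical examples,
# not anything specified by the TA1.4 funding call.

def nash_bargaining(utilities, disagreement, candidates):
    """Return the candidate maximising the Nash product
    prod_i (u_i(x) - d_i), restricted to candidates where every
    stakeholder does at least as well as at the disagreement point."""
    best, best_product = None, float("-inf")
    for x in candidates:
        gains = [u(x) - d for u, d in zip(utilities, disagreement)]
        if any(g < 0 for g in gains):
            continue  # x leaves someone worse off than no agreement
        product = 1.0
        for g in gains:
            product *= g
        if product > best_product:
            best, best_product = x, product
    return best

# Hypothetical stakeholders with opposed preferences over a risk
# threshold t in [0, 1]: a deployer prefers higher t, a regulator lower.
deployer = lambda t: t            # utility rises with permitted risk
regulator = lambda t: 1.0 - t     # utility falls with permitted risk
disagreement = [0.2, 0.2]         # each party's payoff if talks fail

candidates = [i / 100 for i in range(101)]
print(nash_bargaining([deployer, regulator], disagreement, candidates))
# -> 0.5, the threshold maximising the product of both parties' gains
```

Proposals in this area would of course engage with far richer settings – many stakeholders, social choice functions, strategic behaviour – but the sketch captures the core idea of resolving divergent risk tolerances quantitatively rather than by fiat.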
We are also open to applications proposing other lines of work that illuminate critical sociotechnical dimensions of Safeguarded AI systems, provided they offer solutions that increase assurance that these systems will reliably be developed and deployed in service of humanity at large.
(Work to evaluate the societal impacts of Safeguarded AI systems is out of scope for this solicitation, and will instead be the focus of a future funding call on TA1.4 Phase 2.)
Watch the solicitation presentation
Download the full funding call [PDF]
Who can apply?
ARIA welcomes applications from across the R&D ecosystem, including individuals, universities, research institutions, small, medium and large companies, charities and public sector research organisations. Applicants from outside the UK are eligible, but we usually expect the majority of project work to be carried out in the UK.
Phase 1 will be supported by a total of £3.4m across 2-6 teams, over a period of up to 18 months. ARIA welcomes proposals for research projects that span the full 18 months of the TA1.4 period, as well as projects that will conclude sooner.
If you would like help to find a collaboration partner, contact Innovate UK Business Connect’s Robotics & AI team.