Backed by nearly £50m, ARIA’s Scaling Trust programme sits within the Trust Everything, Everywhere opportunity space and seeks to create the capability for AI agents to securely coordinate, negotiate, and verify with one another on our behalf.
AI agents are already writing code, browsing the web, managing workflows, and beginning to act in the physical world. But while individual agents grow more capable by the day, they still can’t securely coordinate with each other, especially in real-world scenarios where the stakes are high, the parties are untrusted, and the environment is adversarial.
We are missing the trust infrastructure that would move them from solo operators to secure collaborators. Without it, agents remain either powerful but isolated, or operating in the wild with no security at all.
This programme is building that infrastructure: open-source tools and scientific foundations for agents to securely interact across digital and physical worlds, combined with a live adversarial arena to stress-test everything we build.
At the heart of the Scaling Trust programme will be the Scaling Trust Arena: a platform for open competitions designed to test AI systems’ capabilities in multi-agent coordination across digital and physical worlds, with a multi-million pound prize pool for the strongest teams.
In Phase 1 of this programme, we are seeking to fund teams to develop open-source tools for future Arena participants and to perform fundamental research that moves us from empirical evidence to theory-driven guarantees in agentic coordination.
In advance of launching the Scaling Trust Arena, we are looking to fund teams across the following programme tracks:
Track 2 | Tooling: Open-source agents and reusable components that enable secure requirement capture, negotiation, protocol generation, and verification in multi-agent settings. Must be usable by all Arena participants, built for adversarial environments, and designed to generalise beyond single tasks.
Track 3 | Fundamental research: Foundational work that turns empirical security into provable guarantees, and unlocks new cyber-physical trust primitives for agents. Focus areas include formal AI security, generative protocol design and verification, and cyber-physical trust anchors.
(Track 1: Arena will be funded via a separate call.)
Full details of what is in and out of scope for each area can be found in the call for proposals.
-
We invite applications from interdisciplinary teams bridging fields like cryptography, AI security engineering, game theory, robotics and more. We welcome applications from those at universities, research institutes, startups, and established companies, as well as from individuals. We encourage collaborative teams, but solo applicants are also invited to apply. Applicants can be based in the UK or abroad.
Join a team
For those seeking specific expertise to support their proposal, ARIA has created a teaming request form to help you find potential team members who have registered their interest in this programme. After a quick registration, you will gain access to a list of other individuals looking to find or share expertise, and to a dedicated teaming channel on our community Discord.
-
Submission deadline: 24 March 2026 (14:00 GMT)
Grant size: £100k–£3m
Grant duration: 3 months–1 year (with potential for renewal)
-
Description: Open-source tooling that will provide the baseline infrastructure usable by all in the Arena and beyond
Goals:
- Create baseline agents that teams can build on to participate in the Arena
- Build specialised components that can be used by many agents
- Explore a diverse set of agent design strategies that can lead to production-grade implementations
- Start the adoption phase for these technologies
Sub-tracks
- 2.1 Agents – agents that can be used as ‘participants’ in the Arena, composed of a set of components (see the sketch below)
- 2.2 Components – a specific tool that any agent can use
- 2.3 Adoption – production software, integration, and pilot efforts
Note: In this first call for proposals, we are only concerned with 2.1 Agents and 2.2 Components. 2.3 Adoption will come in a later solicitation (expected mid-2027).
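To make the split between sub-tracks 2.1 and 2.2 concrete, here is a minimal sketch of an Arena-style agent composed of pluggable components. All names, interfaces, and the toy negotiation policy are hypothetical illustrations, not a prescribed Arena API:

```python
from typing import Protocol

class Component(Protocol):
    """Hypothetical 2.2-style interface: a reusable capability any agent can plug in."""
    def handle(self, message: dict) -> dict: ...

class Negotiator:
    """Illustrative component: evaluates offers against a fixed reserve price."""
    def handle(self, message: dict) -> dict:
        offer = message.get("offer", 0)
        return {"accept": offer >= 10, "counter": max(offer, 10)}

class Verifier:
    """Illustrative component: checks a claim against supplied evidence."""
    def handle(self, message: dict) -> dict:
        return {"verified": message.get("claim") == message.get("evidence")}

class ArenaAgent:
    """Hypothetical 2.1-style agent: routes incoming messages to its components."""
    def __init__(self, components: dict[str, Component]):
        self.components = components

    def act(self, kind: str, message: dict) -> dict:
        return self.components[kind].handle(message)

agent = ArenaAgent({"negotiate": Negotiator(), "verify": Verifier()})
print(agent.act("negotiate", {"offer": 12}))                 # {'accept': True, 'counter': 12}
print(agent.act("verify", {"claim": "x", "evidence": "x"}))  # {'verified': True}
```

The point of the split: a 2.2-style component such as Negotiator or Verifier should be reusable across many different agents, while a 2.1-style agent is the composition that actually enters the Arena.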
Project size: £200k–£2m per project. Project length: 3 months–1 year. Expected number of teams: 4–6.
For full details, read the call for proposals.
-
Description: Theory that moves us from empirical evidence to provable guarantees and helps us design new security primitives that can aid agentic coordination.
Goals:
- Bring scientific confidence to the trustworthiness of developed agents
- Provide a scientific framework for formal AI security and generative security
- Design new security primitives for agents to securely interact with the real world
Sub-tracks
- 3.1 Formal AI Security: Formalisation of agentic adversaries and new security settings (see the sketch below).
- 3.2 Cyber-Physical Primitives: Security primitives that can aid cyber-physical agentic coordination.
- 3.3 Foundations of Generative Security: Automated protocol generation and verification.
- 3.4 Bluesky: Open-ended research.
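As a flavour of what formalising agentic adversaries could look like, below is the standard distinguishing-advantage template from classical provable security, hypothetically instantiated for an agentic adversary $\mathcal{A}$ attacking a coordination protocol $\Pi$. This is an illustrative sketch, not a definition the programme has fixed:

\[
\mathrm{Adv}^{\mathsf{coord}}_{\Pi}(\mathcal{A}) \;=\; \Bigl|\,\Pr\bigl[\mathsf{Exp}^{\mathsf{real}}_{\Pi,\mathcal{A}} = 1\bigr] \;-\; \Pr\bigl[\mathsf{Exp}^{\mathsf{ideal}}_{\Pi,\mathcal{A}} = 1\bigr]\,\Bigr|
\]

In this template, $\Pi$ is deemed secure if the advantage is negligible for every efficient adversary; one open question for sub-track 3.1 is what “efficient” and “experiment” should mean when the adversary is itself an AI agent rather than a classical probabilistic machine.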
Project size: £100k–£3m per project. Project length: 6–18 months. Expected teams: 3 research centres (£2m–£3m each) plus 4–12 smaller teams.
For full details, read the call for proposals.
-
ARIA will run multiple webinars that provide an overview of the programme’s objectives, scope, and application process, and give potential applicants an opportunity to put questions to the ARIA team. Please register your interest and submit questions in advance of these events.
Webinar 1: 17 February 2026 (15:30 GMT) – register here