Navigating a nebulous AI governance landscape
The fast-evolving landscape of artificial intelligence (AI) regulation has become a major concern for governments, businesses, and consumers across the globe.
The UK government has just published its highly anticipated response to the AI Regulation White Paper consultation. Key takeaways include:
- The government is investing over £100 million to support regulators and advance research and innovation on AI, with a focus on agility, targeted regulation, and sector-specific approaches.
- Key regulators have been asked to publish plans by the end of April for how they will address AI risks and opportunities in their sectors, demonstrating a commitment to transparency and proactive regulation.
- The government’s response emphasises the importance of responsible and flexible regulation to support the UK’s global leadership in AI innovation, while effectively addressing emerging risks.
What has been achieved so far?
The global regulatory landscape is constantly shifting. The publication of the National AI Strategy in September 2021 marked a significant milestone, while the EU AI Act, provisionally agreed in December 2023, has set the stage for comprehensive standards and compliance efforts. The UK, through initiatives like the AI Safety Summit and ongoing engagement in global conversations, continues to play a major role in shaping international standards.
These recent developments demonstrate a proactive stance towards nurturing innovation while ensuring ethical AI practices. This can be a challenge for small businesses, as adhering to complex AI standards can require expert knowledge and substantial investments. However, initiatives like the Innovate UK BridgeAI programme will play a key role in addressing this need and reducing costs.
The UK boasts a vibrant AI economy, particularly in the start-up sector. I recently spoke to Dr Matilda Rhode, Lead on AI and Cybersecurity at the British Standards Institution (BSI), the UK’s national standards body. She raised an important point:
“The challenges faced by smaller organisations in navigating AI standards and regulations cannot be overstated, and addressing these challenges is crucial for democratising access to AI resources.”
The AI Standards Hub – led by The Alan Turing Institute, BSI, and the National Physical Laboratory (NPL), and supported by the UK Government – is a collaborative initiative that aims to do just this. It provides a platform for expert support and knowledge-sharing on AI standards, along with practical tools and educational materials for businesses. By drawing on these shared resources and tailored support, small businesses can be empowered to transition to the next stage of development and confidently interface with larger organisations.
Matilda also pointed out that businesses often find themselves grappling with the interpretation of incoming regulations that have yet to be published, making it difficult to align with evolving standards. “In the rush to develop AI quickly, there’s a risk of knee-jerk reactions such as bans on technology or deploying solutions without adequate safety measures. A more nuanced approach is essential to navigate these complexities responsibly and sustainably.”
The Hub’s AI Standards Database is a crucial step towards making standards more accessible – it offers a searchable catalogue of standards being developed or published, acting as an ‘observatory’ for AI standards.
Why are AI standards crucial for businesses?
Standards are not just regulatory hurdles but vital tools for risk management and future-proofing. They offer clarity for businesses adapting to regulatory change and play an often underrated role in building trust, particularly in cross-border activities.
Some of the biggest challenges facing businesses and governments around AI lie in upskilling the workforce, promoting awareness of existing and developing regulation, and ensuring transparent data practices. Sector-specific risks will also require vastly differing preventative measures guided by robust, relevant standards.
As we navigate the AI governance landscape, standards are essential ‘scaffolding’ on which businesses can build AI that is trustworthy and future-forward. Matilda summed this up aptly in our conversation: “Standards capture the experience and knowledge of experts, offering businesses a framework to navigate challenges and demonstrate their commitment to responsible AI.”
Small businesses, in particular, stand to benefit from initiatives like BridgeAI, which foster collaboration, simplify complexities, and open new doors to AI resources. Thoughtful governance anchored in standards will be the guiding force for a fair, transparent, and innovative AI ecosystem.
Find out how BridgeAI can support your business in navigating the AI governance landscape here.
Author: Sara El-Hanfy, Head of Artificial Intelligence and Machine Learning at Innovate UK
Related programme
BridgeAI
Empowering UK organisations to harness the power of AI through support and funding, bridging the AI divide for a more productive UK.