
New Government? New AI Regulation?

September 3, 2024
Insight

With Rishi Sunak out and Keir Starmer in, the regulatory landscape for Artificial Intelligence (AI) in the UK is shifting yet again. What is the new Labour Government’s emerging strategy on AI regulation, and what potential influences could shape its future direction?

Labour's Initial Proposals

The new Labour Government has signalled its intention to introduce an "overarching regulatory framework" for AI, promising a "stronger" approach than that of its Conservative predecessor. The previous government held back from formal AI legislation, opting instead for a set of cross-sectoral principles, such as transparency and fairness, to be implemented by existing sector regulators.

In contrast, Labour aims to create a more robust framework by focusing on regulating companies that develop the most powerful AI systems. However, the specifics of this regulation remain unclear. Another notable proposal by the Government is the creation of a Regulatory Innovation Office (RIO), a new government body designed to accelerate regulatory decisions and keep pace with rapid technological advancements. Other potential commitments include giving some statutory footing to agreements between AI developers and the AI Safety Institute (a state-backed research organisation focused on advanced AI safety for the public interest), ensuring developers share critical information.

To support AI growth in the UK, the Government has also committed to removing planning barriers for new data centres, establishing a National Data Library to unify existing research programs, and offering decade-long research and development funding to support universities and startups.

Private Member's Bill: Lord Holmes of Richmond

In November 2023, Lord Holmes introduced the Artificial Intelligence (Regulation) Bill (AI Bill), a private member’s bill aimed at promoting the safe and responsible use of AI while addressing emerging issues like deepfakes and misinformation.

The AI Bill successfully reached the House of Commons after its third reading in the House of Lords but stalled following the general election. Lord Holmes intends to reintroduce the Bill in the current parliamentary session.

If passed, the AI Bill would introduce several key measures:

  1. AI Authority: A new regulatory body overseeing AI regulation, assessing the competency of existing regulators, and educating businesses on AI-related issues.
  2. Regulatory Principles: A set of principles guiding AI regulation and the way businesses develop, deploy and use AI.
  3. Sandboxes: Regulatory sandboxes that allow firms to test innovative AI solutions with real consumers in a controlled environment.
  4. AI Responsible Officer: Businesses using AI would need to designate a Responsible Officer to ensure AI is used safely, ethically, and without bias.
  5. Transparency and IP Obligations: Companies training AI systems would be required to disclose third-party data and IP used and ensure consent is obtained. Clear labelling and informed consent processes would also be mandatory for AI products and services.
  6. Public Engagement: The AI Authority would be tasked with facilitating long-term public engagement on the opportunities and risks associated with AI.
  7. Offences and Fines: Provisions for creating offences and imposing penalties related to AI misuse.

The RIO’s mandate appears to be broader than the AI Authority proposed by Lord Holmes. Its primary aim is to reduce delays in regulatory decisions that prevent technology-powered products and services from reaching their intended audiences. The RIO seems intended to act as a regulator of regulators, across multiple sectors. Therefore, it is unlikely to encompass the specific functions of the AI Authority envisioned by Lord Holmes. However, a dedicated AI authority, as proposed by Lord Holmes, could complement the RIO by offering more focused expertise. In any event, the AI Bill received cross-party support, suggesting that elements of it may influence the new Government’s regulatory approach.

The European Union's AI Act

Although the UK has left the European Union (EU), UK-based developers, importers, or distributors of AI systems that operate within the EU will still be affected by the EU's Artificial Intelligence Act (AI Act), which was finalised in May this year.

The AI Act classifies AI systems into risk categories and imposes restrictions based on their risk level:

  • Unacceptable Risk: Prohibited systems, such as those that threaten fundamental rights or manipulate human behaviour.
  • High Risk: Systems are subject to stringent requirements, including risk mitigation, data quality, cybersecurity, and fairness. This category includes AI used in critical infrastructure, such as energy and medical devices.
  • Limited Risk: Systems are required to disclose their AI nature to users, particularly those interacting directly with individuals.
  • Minimal Risk: Systems with no restrictions, although they may opt to follow voluntary codes of conduct.

The AI Act may influence the UK’s regulatory trajectory. As the new Government seeks closer trade relationships with the EU, aligning UK and EU regulations could become a priority. The "Brussels Effect," where EU regulations set a global standard, could also prompt the UK to adopt similar AI regulatory measures. 

The Labour Government’s AI regulation strategy is still evolving, with a stronger framework and new regulatory bodies on the horizon. However, the specifics remain uncertain. Influences such as Lord Holmes' AI Bill and the EU’s AI Act may shape the direction of UK policy. As the UK potentially rebuilds closer ties with the EU, businesses in the AI sector should stay vigilant, monitoring both UK and EU developments to anticipate future regulatory shifts.
