New UK AI (Regulation) Bill – A potential step towards the statutory regulation of AI systems in the UK

Hogan Lovells
Against the backdrop of the recent release of the close-to-final EU AI Act and the UK government’s response to the AI White Paper, discussions regarding the need for AI regulation in the UK are gaining momentum. The timely introduction of the Artificial Intelligence (Regulation) Bill by Lord Holmes on 22 November 2023 marks a crucial step forward. This article looks at the rationale behind the Bill and its key aspects.


Background to the Artificial Intelligence (Regulation) Bill

The Artificial Intelligence (Regulation) Bill (the “Bill”) was introduced as a Private Members’ Bill by Lord Holmes of Richmond in the House of Lords on 22 November 2023. The primary purpose of the Bill is to establish a framework for the regulation of AI in the UK. This involves putting AI regulatory principles on a statutory footing, and establishing a central AI Authority responsible for overseeing the regulatory approach to AI.

Commenting on the rationale of the Bill, Lord Holmes provided his views in Hogan Lovells’ Digital Transformation: The Influencers Podcast (link to the podcast here):

“We need to legislate… in a way which is entirely possible, in fact necessary, to hold consumer protection, citizen rights and pro innovation, all in the same hand. It's essential that any legislation or any regulation is built on those key principles.”

In addition to the social risks for citizens, Lord Holmes also stressed the importance of creating a clear framework for the recognition and protection of intellectual property rights that arise from the use of AI systems:

“Work is urgently needed and it can't be that we take a wait-and-see approach here, because if we wait and see, it will be desperately difficult to try and reassert those rights retrospectively. Wait-and-see, for me, is never the way to achieve optimal outcomes. We need to lead, and IP and copyright is but one very clear example of why we need to lead and why we need to lead right now.”  

Originating in either the House of Commons or the House of Lords, Private Members’ Bills provide a distinct avenue for individual legislators to advocate for novel legislative proposals. They can serve a valuable role in the legislative process by stimulating meaningful discourse on important emerging issues and potentially influencing society at large. Private Members’ Bills that capture the public imagination have successfully become law, and there are various precedents for Private Members’ Bills receiving Royal Assent, as seen with the Computer Misuse Act 1990 and the Tobacco Advertising and Promotion Act 2002. With the recent release of the close-to-final EU AI Act and the UK government’s response to the AI White Paper, public discussions regarding the need for AI regulation in the UK are gaining momentum. The Bill’s strong alignment with current priorities, coupled with growing cross-party support for AI regulation in the UK, has bolstered hopes for its potential passage.


Key Elements of the Bill

The Bill offers valuable insights into regulating the development and use of AI systems in the UK. It may serve as a useful framework if the government chooses to adopt statutory AI regulation. Some key aspects covered under the Bill include:

  • The definition of AI;
  • Regulatory principles;
  • The establishment of an AI Authority;
  • The appointment of AI Responsible Officers;
  • Requirements on transparency, IP obligations and labelling; and
  • Public engagement.

Definition of AI

AI is defined broadly in the Bill as “technology enabling the programming or training of a device or software to—

(a) perceive environments through the use of data;

(b) interpret data using automated processing designed to approximate cognitive abilities; and

(c) make recommendations, predictions or decisions;

with a view to achieving a specific objective.”

The Bill’s definition of AI is comprehensive, covering data-driven decision-making, and its broad scope helps ensure the framework remains adaptable to future advancements in the field.

Similar to the EU AI Act, the definition of AI includes generative AI, meaning deep or large language models able to generate text and other content based on the data on which they are trained.


Regulatory Principles

The Bill requires the AI Authority to have regard to the principles that—

  (a) regulation of AI should deliver:
    1. safety, security and robustness;
    2. appropriate transparency and explainability;
    3. fairness;
    4. accountability and governance;
    5. contestability and redress;*

  *These principles can be amended by the Secretary of State by regulations.

  (b) any business which develops, deploys or uses AI should:
    1. be transparent about it;
    2. test it thoroughly and transparently;
    3. comply with applicable laws, including in relation to data protection, privacy and intellectual property;

  (c) AI and its applications should:
    1. comply with equalities legislation;
    2. be inclusive by design;
    3. be designed so as neither to discriminate unlawfully among individuals nor, so far as reasonably practicable, to perpetuate unlawful discrimination arising in input data;
    4. meet the needs of those from lower socio-economic groups, older people and disabled people;
    5. generate data that are findable, accessible, interoperable and reusable;

  (d) a burden or restriction which is imposed on a person, or on the carrying on of an activity, in respect of AI should be proportionate to the benefits, taking into consideration the nature of the service or product being delivered, the nature of risk to consumers and others, whether the cost of implementation is proportionate to that level of risk and whether the burden or restriction enhances UK international competitiveness.

The Bill incorporates the five cross-sectoral principles listed in (a) above, which are outlined in the UK government's White Paper, and further elaborates on them, offering guidance for the proposed AI Authority. This focus on the principles aligns with the UK government’s recent response to the AI White Paper and is crucial for building public trust in AI.


Establishment of AI Authority

The Bill empowers the Secretary of State to create the AI Authority, which will have the following functions and responsibilities:

  1. ensuring that relevant regulators take account of AI;
  2. ensuring alignment of approach across relevant regulators in respect of AI;
  3. undertaking a gap analysis of regulatory responsibilities in respect of AI;
  4. coordinating a review of relevant legislation, including product safety, privacy and consumer protection, to assess its suitability to address the challenges and opportunities presented by AI;
  5. monitoring and evaluating the overall regulatory framework’s effectiveness and the implementation of the AI principles above, including the extent to which they support innovation;
  6. assessing and monitoring risks across the economy arising from AI;
  7. conducting horizon-scanning, including by consulting the AI industry, to inform a coherent response to emerging AI technology trends;
  8. supporting testbeds and sandbox initiatives to help AI innovators get new technologies to market;
  9. accrediting independent AI auditors;
  10. providing education and awareness to give clarity to businesses and to empower individuals to express views as part of the iteration of the framework; and
  11. promoting interoperability with international regulatory frameworks.

The proposed AI Authority would play a vital role in coordinating existing regulatory bodies and promoting responsible AI practices. Its functions, including supporting testbeds and sandbox initiatives and conducting horizon-scanning, will be critical for encouraging innovation.

The Bill envisages that existing regulators, including the ICO and the CMA, will continue their roles in regulating and providing guidance on AI within their respective fields.


Appointment of AI Responsible Officers

The Secretary of State, in consultation with the AI Authority, must by regulations require that any business which develops, deploys or uses AI has a designated AI officer, who will be responsible for ensuring the safe, ethical, unbiased and non-discriminatory use of AI by the business, and ensuring that data used by the business in any AI technology is unbiased.

The requirement for an AI Responsible Officer within businesses that develop, deploy, or use AI signifies a commitment to responsible AI governance. While details regarding skillsets and potential overlap with Data Protection Officers under the GDPR need clarification, this can be addressed during the refinement process.


Requirements on Transparency, IP Obligations and Labelling

The Secretary of State, in consultation with the AI Authority, must by regulations provide that:

  1. any person involved in training AI must:
    1. supply to the AI Authority a record of all third-party data and intellectual property (“IP”) used in that training; and
    2. assure the AI Authority that:
      1. they use all such data and IP by informed consent; and
      2. they comply with all applicable IP and copyright obligations;
  2. any person supplying a product or service involving AI must give customers clear and unambiguous health warnings, labelling and opportunities to give or withhold informed consent in advance; and
  3. any business which develops, deploys or uses AI must allow independent third parties accredited by the AI Authority to audit its processes and systems.

Regulations under this section may provide for informed consent to be express (opt-in) or implied (opt-out) and may make different provision for different cases.

The Bill emphasises transparency through clear labelling and informed consent for users of AI systems. This could be an important step forward not only to protect users, but also to protect the creative industries, particularly in light of the UK government shelving, in February 2024, a long-awaited voluntary AI copyright code that would have set out rules on the training of AI models using copyrighted materials such as books and music. This clause would ensure that AI developers can only use such copyrighted materials after seeking informed consent and complying with all applicable IP and copyright obligations.


Public Engagement

According to the Bill, the AI Authority must implement a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI, and consult the general public and such persons as it considers appropriate as to the most effective frameworks for public engagement, having regard to international comparators.

Commenting on the importance of public engagement, Lord Holmes stated: “That public engagement is absolutely critical and AI needs to be able to prove itself trustworthy. Otherwise, it won't really achieve any of the optimal benefit, but we may well be saddled with many of the potential downsides. And we've seen how to get this right. Take IVF, for example, in vitro fertilisation. What could be more terrifying? What could be more science fiction than bringing life into being in a laboratory test tube? Why is it now seen as a positive part of our society? Because years ago, a colleague of mine, Baroness Warnock had the Warnock Commission to do exactly this; to engage with people, to engage with their concerns, their issues, and to have that real sense of engagement around an issue, so we get to a positive societal benefit from it. That's what we need with AI. That's why, as I say, the public engagement clause is probably for my mind, the most important of all are the ones that are in the bill.”

This Bill presents a crucial opportunity and a blueprint for regulating AI systems and building trust in the safe use, development and deployment of AI systems in the UK. By fostering responsible AI development, the UK government can unlock the potential of AI technology while mitigating potential risks.


Next Steps

The Bill completed its first reading in the House of Lords in November 2023 and will proceed to its second reading, scheduled for 22 March 2024. There will be subsequent committee and report stages prior to the third reading. Following the completion of these stages in the House of Lords, the same stages will be repeated in the House of Commons, provided that Parliamentary time allows and cross-party support continues.

If you are interested in listening to the full conversation with Lord Holmes about the Bill, please click the link here. You can also learn more about the Bill and Lord Holmes's work by visiting his website, blog or LinkedIn @LordChrisHolmes.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising

Written by:

Hogan Lovells