The global race to regulate AI

Where does the US stand?

Policymakers worldwide scrambled to keep up with AI’s rapid development in 2023 and will keep doing so in 2024. Though building the frameworks needed to regulate the technology has been a slow process, lawmakers made significant strides in December 2023, when the EU passed the Artificial Intelligence Act.

The AI Act, heralded as the world’s first comprehensive framework for reining in AI use, brought to light some of the reasons why regulating the technology is slow going. Not only does AI evolve at a dizzying speed, but lawmakers also need to be educated on the complex technology before they can create laws to control its use.

And the need to truly understand—and define—AI is critical, because those standards and definitions have legal implications. Ultimately, how AI is defined will determine how, and to what extent, it’s regulated. 

The importance of defining AI was underscored when the EU, in developing the AI Act, acknowledged that although the scientific community has yet to adopt a universal definition of AI, the determination of what constitutes an AI system is “crucial for the allocation of legal responsibilities.” The EU did agree on a definition of AI, but only after “endless regulatory discussions.” 

Another challenge for lawmakers is determining who should regulate AI, and how much. Regulatory bodies worldwide have adopted different approaches to creating laws around AI, which means laws will diverge not only on AI use but also on the impact of that use on key issues, such as data privacy. Here we’ll compare the distinct approaches the EU and UK are taking to AI regulation, and where the US stands.

The EU AI Act: A human-centered, risk-based approach

The AI Act takes a pyramid, risk-based approach to regulating AI, breaking AI into four categories based on the level of risk certain use cases present to people’s safety and fundamental rights: unacceptable risk, high risk, limited risk, and low and minimal risk. 

In the Act’s risk pyramid, greater risk warrants more stringent regulation. The Act bans the use of AI that presents an unacceptable risk anywhere in the EU’s 27 nations, such as AI used for untargeted scraping of facial images from the internet. High-risk AI systems—with use cases in areas such as education, safety, and law enforcement—are permitted but highly regulated and subject to compliance with specific risk management requirements. 

AI systems presenting limited risk—which include chatbots and systems that generate or manipulate image, audio, or video content—are subject to a limited set of transparency obligations, while low- or minimal-risk AI can be used across the EU without additional legal obligations.

The AI Act is a sweeping piece of legislation, one that seeks to impose specific legal obligations on both providers and deployers throughout an AI system’s life cycle. In addition to enumerating specific requirements and fines for violations, the law requires member states to designate a national authority to enforce the legislation. It also calls for a centralized European Artificial Intelligence Office to coordinate enforcement efforts.

The UK approach: Pro-innovation guidance for regulators

The UK’s self-described “pro-innovation” approach to regulating AI is set forth in a white paper published in March 2023 by the government’s Department for Science, Innovation and Technology. The paper sets out proposals for regulating AI, but it doesn’t put forward any specific legislation, and it doesn’t even define AI.

Additionally, instead of centralizing AI governance under one new regulator, the UK government encourages existing regulators to develop their own approaches for regulating AI within their sectors. The paper does include five principles to guide regulators in their approach to AI risks: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Despite taking a wait-and-see approach to regulating AI, the UK government did host the first global AI Safety Summit in November 2023, an event attended by the US and 26 other governments, as well as several technology companies, global organizations, and high-profile leaders in tech, such as OpenAI’s Sam Altman. 

One of the biggest results of the Summit was that AI companies agreed to give governments early access to their models to perform safety evaluations. Additionally, the Bletchley Declaration, signed by all countries in attendance, confirmed their agreement to collaborate on mitigating risks arising from AI systems. 

The UK also launched the AI Safety Institute following the Summit. The government has said the Institute is not a regulator, and that its mission is to perform research to advance AI safety for the public interest and “minimize surprise to the UK” from rapid and unexpected advances in AI.

EU vs UK, and where the US falls

The EU and UK approaches to regulating AI are markedly different from one another, with the EU adopting a prescriptive, comprehensive framework with specific requirements and designated enforcement and regulatory bodies. The UK, on the other hand, has yet to propose legislation, as it seeks to remain flexible and avoid potentially stifling innovation.

So how does the US approach measure up? While the government has yet to pass specific legislation, there have been more than three dozen congressional hearings on AI, and some 50 AI-related bills have been introduced in both houses. The Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, issued in October 2023, also marked a significant step toward US regulation of the development and application of AI.

Similar to the UK’s white paper, the Executive Order directs agencies to weigh several factors when considering new AI regulation, and it builds on the Blueprint for an AI Bill of Rights, which aims to guide the design, use, and deployment of automated systems.

But the Order also established several initiatives, including creation of the US AI Safety Institute to develop guidelines for regulators to conduct risk evaluations of AI systems. And, much like the EU’s regulations, it requires AI companies to make certain disclosures, including notifying the government when training potentially dangerous models. 

The US approach appears to be influenced by both frameworks. The government has adopted a risk-focused approach similar to the EU’s and wants an institute to develop guidance for regulators. At the same time, it calls for individual federal agencies to apply that guidance in their own rulemaking and enforcement, indicating a more agency-focused approach.

Approaches to regulating AI have varied greatly among nations. While regulators will undoubtedly continue to play catch-up this year, given how rapidly AI continues to develop, we can expect to see more proposed legislation from nations worldwide, including the US, and even the UK. 
