Technology policy’s next big challenge: Divergent approaches to regulating AI

Hogan Lovells

The rapid growth of AI and its ramifications for nearly all aspects of society and the economy are placing increasing pressure on the U.S. and European governments to proactively set regulations and guardrails for this nascent technological revolution. While AI regulation is only in its infancy, we are already seeing divergent regulatory approaches on either side of the Atlantic. What is clear, however, is that AI policy will be a hotly debated issue in the halls of government for years to come.

USA

Even in a bitterly divided Washington, Congress and the Biden Administration are keenly focused on the growth of AI. Over the coming months and years, we expect this focus to grow and AI policymaking efforts to intensify. 

Congress: Bipartisan Hopes, Partisan Pitfalls

AI is one of the current “shiny objects” in Congress, attracting the attention of many Members of Congress who see the regulation of AI as a potentially bipartisan issue. In the first months of the new Congress, both the House and Senate have already held several hearings and introduced legislation on various aspects of AI regulation. Further, in April, Senate Majority Leader Chuck Schumer (D-NY) announced an intensive effort to examine and understand AI technology and develop a legislative framework.

As the U.S. Congress begins to consider fundamental policy issues in AI regulation, there are several overarching, sometimes conflicting dynamics at play. First, lawmakers are intent on ensuring that China does not outpace U.S. innovation. China’s rising geopolitical and economic power is one of the only bipartisan issues in a deeply partisan Congress, and we expect it to color the debate over AI regulation. Second, Congress has been highly focused on the tech sector in recent years, including on issues such as data privacy, competition, and children’s online safety. Some members are now also looking at the intersection of these issues with AI.

That said, there is far from any bipartisan consensus on how to regulate AI or even which specific issues to address. Some Democratic lawmakers have expressed concern about the potential for AI to perpetuate bias, misinformation, and discrimination. Meanwhile, some Republicans see AI as an area for economic growth and cost savings, but they are also concerned that AI might crowd out conservative viewpoints. Partisan politics in the run-up to the 2024 election could also overwhelm Congress’s attention. In short, we expect much congressional interest and activity on AI over the coming years, but it remains to be seen how any proposals will progress.


Biden Administration

At the same time, the Biden Administration has been steadily outlining its proposed framework for regulating AI across industries, including:

  • The White House Office of Science and Technology Policy (OSTP) Blueprint for an AI Bill of Rights. In October 2022, OSTP published the Blueprint for an AI Bill of Rights, laying out a comprehensive, but nonbinding, set of principles for the development and use of AI. The Blueprint focuses on five core guidelines: (i) safe and effective systems; (ii) algorithmic discrimination protections; (iii) data privacy; (iv) notice and explanation; and (v) human alternatives, consideration, and fallback. The Blueprint will serve as the backbone of future policy guidance and regulations that the Biden Administration proposes. Building on the Blueprint, in May 2023, the Biden Administration announced a series of new policies and efforts to promote responsible AI development after a meeting with CEOs from Alphabet, Anthropic, Microsoft, and OpenAI.
  • In addition to the Blueprint, various federal agencies have released their own policies and guidance on the use of AI, including the Department of Defense’s Ethical Principles for Artificial Intelligence.
  • To help guide the Administration’s work on AI, the U.S. Department of Commerce created the National AI Advisory Committee (NAIAC), made up of close to 30 leading tech experts.
  • Broadly, all these efforts share a core philosophy of promoting innovation and ensuring the United States remains at the forefront of AI development, while providing protections against the significant privacy, civil rights, and trustworthiness challenges this emerging technology presents.

Europe

The explosion in the use and application of generative AI since late 2022 has put its regulation firmly at the center of the policy agenda across Europe.  Much has been said about the vast potential for AI to change every facet of ordinary life, but some have also expressed concerns about potential harms, including bias, inaccuracy, and infringement of rights.

It is no surprise, then, that AI features among Europe’s most prominent policy debates. The main policy driver in the UK appears to be promoting investment in the safe development of AI capabilities while also leveraging post-Brexit regulatory flexibility to position the UK as a regional AI powerhouse. The government calls this its “pro-innovation” approach, with the Prime Minister recently launching a £100m Taskforce, modeled on the UK COVID-19 Vaccines Taskforce, to assist the development of UK-based AI technology and advise on AI policy. Meanwhile, the focus in the EU is on being at the forefront of a new sphere of technology regulation and setting a global standard. Political agreement was recently reached in the European Parliament on a version of the EU’s flagship comprehensive AI legislation, the AI Act. Last-minute amendments supplement the Act’s requirements to target the specific risks posed by generative AI and chatbots, ensuring that such systems can be designed and developed only in accordance with fundamental rights. As explained in our previous article, regional policymakers are not taking a consistent approach to the regulation of AI:

  • The EU AI Act proposes a core cross-sectoral legislative framework that will put in place a harmonized set of standards applicable to AI applications, including more detailed and prescriptive requirements for AI systems designated as high-risk. The proposal builds upon existing EU legislation focused on individual rights, such as the GDPR, as well as the recently introduced regulatory regimes for digital services: the Digital Services Act and the Digital Markets Act. The latter two regimes entered into force in November 2022 and impose obligations seeking to mitigate AI-related risk, such as transparency obligations concerning the use of algorithmic systems by digital services. The parliamentary debate has already begun to demonstrate the overlap between new and emerging EU regulatory regimes; for example, late amendments in the European Parliament added recommender systems used by social media platforms to the AI Act’s list of high-risk AI applications.
  • There is also a risk of competing regulatory objectives. For example, one proposed (but ultimately unsuccessful) amendment by a group of MEPs sought to add to the list of prohibited practices in the AI Act (i.e., AI applications deemed to pose an unacceptable risk) any AI system “for the general monitoring, detection and interpretation of private content.” This is exactly the sort of monitoring tool that would be required to comply with a separate EU legislative proposal designed to require online services to identify and remove Child Sexual Abuse Material.
  • By comparison, the UK’s policy approach to AI regulation is intended to be flexible, decentralized, and sector-specific. The UK set out its stall in a recently published White Paper, which states that its “pro-innovation” regulatory framework will focus on the context in which AI systems are deployed and lay out principles to be applied in different contexts by expert sectoral regulators, who will publish guidelines on the use of AI. The UK’s approach does not, for now, rely on new legislation, with the government saying it is keen to avoid a heavy-handed approach and rushing new laws into force. The government’s Office for AI is inviting businesses to provide their views on its approach, including its cross-sectoral AI principles, by responding to a consultation open until 21 June 2023. In parallel, sectoral regulators have already begun to consult with a view to developing rules applicable to their regulatory remits. In early May 2023, the UK’s Competition and Markets Authority (CMA) launched a review of AI foundation models, including large language models and generative AI, to understand their potential impact on competition and consumer protection. The CMA is inviting stakeholder submissions until 2 June 2023.

As demonstrated by the rapid adoption of large language models and generative AI, and the concerns voiced since, the current policy drivers are very susceptible to change as new technologies emerge and their risks become more visible to the public. 

The vast and increasing number of businesses operating in Europe that develop or use AI systems have an important role to play in informing the policy debate. Engaging with the EU legislative process before the AI Act becomes law, and with UK regulators tasked with applying the “AI principles,” will be critical to developing proportionate and workable regulation.



© Hogan Lovells | Attorney Advertising
