New Artificial Intelligence Laws Are Set in Motion This Year

Schwabe, Williamson & Wyatt PC

Three recent actions in the European Union, Canada, and the United States seek to regulate artificial intelligence. During this period of uncertainty, business leaders who use AI or seek to leverage the technology in their organizations must stay abreast of pending regulation, prepare for future legal requirements, and take reasonable steps to address the risks of any AI deployment.

End-of-Year Rush to Regulate AI in the European Union (EU), Canada, and California

The EU AI Act

On December 9, lawmakers in the EU struck a deal on what could become the world’s first comprehensive law regulating artificial intelligence. The deal sets in motion sweeping new requirements for the use of artificial intelligence that are expected to apply in early 2026.

Pressure to finalize the EU’s Artificial Intelligence Act, first proposed in 2021, has been mounting with the rise of popular generative AI tools such as ChatGPT. The deal struck by EU Council and Parliament negotiators is expected to settle disputes among lawmakers that could have posed roadblocks for the AI Act; for example, lawmakers were reportedly not aligned on the regulation of foundation models and on national-security exceptions.

EU leaders believe this provisional deal will pave the way toward the AI Act’s approval. In the coming weeks, key technical details of the act will be drafted and undergo review. Once complete, the AI Act must be endorsed by the EU Council and Parliament to become law.

If passed, the AI Act will likely require businesses that use AI systems and are subject to EU jurisdiction to:

  • Meet transparency obligations, for example by disclosing when content has been generated by AI, so individuals can make informed decisions about its use.
  • Develop and make available technical documentation for certain AI systems.
  • Implement governance structures and allocate compliance obligations intended to monitor and mitigate AI risks.
  • Refrain from certain prohibited uses of AI systems, namely those most likely to result in harm.

If approved, failure to comply with the EU’s AI Act could lead to significant fines, in some cases up to €35 million or 7% of global turnover, depending on the infringement and the size of the business.

Canada’s Artificial Intelligence and Data Act (AIDA)

On November 28, Canada moved a step closer to implementing its first AI regulatory framework when the government published the full text of amendments to its draft Artificial Intelligence and Data Act (AIDA). The amendments incorporate significant feedback submitted in response to the original version of AIDA, introduced as part of Bill C-27, which seeks to ensure that AI is developed and deployed safely and responsibly.

The published amendments call for:

  • Greater flexibility in the definition and classification of “High-Impact Systems,” which are central to the AIDA’s key obligations.
  • Alignment with the EU AI Act, which substantially broadens the scope of AIDA and makes it more responsive to future technological changes.
  • Clearer responsibilities for and higher accountability of persons who develop, manage, and release high-impact systems.
  • Specific obligations for persons responsible for generative AI systems that would not otherwise be categorized as “high-impact systems.”
  • Greater clarity on the role of the AI & Data Commissioner.

The AIDA provides for robust enforcement and penalties, which would include administrative monetary penalties and the prosecution of regulatory and criminal offences.

California’s Draft AI-Related Rules under the CCPA

On November 27, the California Privacy Protection Agency released a much-anticipated first draft of its rulemaking on automated decision-making technologies (ADMT) under the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act. The draft aims to provide consumers with key protections when businesses use ADMT, which it broadly defines as “any system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decision making.” The publication of this draft sets in motion what could become the most consequential AI regulation in the U.S., with formal rulemaking procedures expected to start in early 2024.

As drafted, the rules would require businesses subject to the CCPA to enable consumers to make informed decisions about ADMT by:

  • Providing “Pre-use Notices” to inform consumers about how a company intends to use ADMT and how to exercise their ADMT-related rights.
  • Giving consumers the ability to opt out of ADMT, with very limited exceptions.
  • Enabling consumers to obtain additional, detailed information about ADMT, such as information about the company’s ADMT logic, parameters, and outputs.

As part of the draft rules, the California Privacy Protection Agency has raised key topics for discussion, such as whether the ADMT rules should apply to the profiling of consumers for behavioral advertising; additional restrictions on the profiling of children; and the use of consumers’ personal information to train ADMT. Those discussions could have significant implications for online advertising and for the use of data-scraping techniques in the development of AI.

Failure to comply with the agency’s rules could result in fines of $2,500 per violation or $7,500 per intentional violation, with no cap on the aggregate amount, which can lead to substantial financial penalties. For example, in August 2022, Sephora agreed to pay $1.2 million to resolve allegations that it violated the CCPA in its use of cookies on its website, and in September 2023, Google agreed to pay $93 million to resolve allegations involving the collection, storage, and use of location data for profiling and advertising purposes without consent.

Regulators Seek to Leverage Existing Laws to Govern AI

While the EU AI Act is poised to become the world’s first comprehensive law specifically regulating AI, many regulators have stated their intent to leverage existing laws to take action against illegal business practices involving AI.

In the U.S., the Federal Trade Commission (FTC) has repeatedly voiced its view that it has the authority, the expertise, and the force of existing laws to hold businesses accountable for abuses and harms caused by their use of AI. In November, the FTC stated:

“Although AI-based technology development is moving swiftly, the FTC has decades of experience applying its authority to new and rapidly developing technologies. Vigorously enforcing the laws over which the FTC has enforcement authority in AI-related markets will be critical to fostering competition and protecting developers and users of AI, as well as people affected by its use. Firms must not engage in deceptive or unfair acts or practices, unfair methods of competition, or other unlawful conduct that harms the public, stifles competition, or undermines the potentially far-reaching benefits of this transformative technology. As we encounter new mechanisms of violating the law, we will not hesitate to use the tools we have to protect the public.”

In November 2023, the FTC authorized a compulsory process to expedite nonpublic investigations involving products and services that use artificial intelligence, claim to be produced using it, or claim to detect its use. The FTC will leverage this process to identify uses of AI that lead to deceptive or unfair acts or practices, unfair methods of competition, or other unlawful conduct that harms the public or competition in the marketplace.

Similarly, in a white paper published in March 2023, the UK government made clear that it has no plans to adopt new legislation to regulate AI as part of its deliberately “pro-innovation” approach. Rather, the UK has stated it will rely on its existing regulators, such as the UK Information Commissioner’s Office, to use their authority to steer businesses toward responsible uses of AI in their respective areas of responsibility.

Reducing the Risks of AI during Regulatory Uncertainty

In our data-driven economy, businesses may want to embrace AI responsibly to benefit from its transformational powers despite regulatory uncertainty. Doing so is not without risk, given the dynamic legal landscape. Such risks might be lessened if businesses:

  1. Develop and maintain an AI policy that addresses:
    • The procurement and use of third-party AI tools and systems.
    • The development and use of in-house, first-party AI tools and systems.
    • The implementation of automated decision-making.
    • The use of first-party, third-party, and publicly available data to train AI tools and systems.

Given the popularity of generative AI tools, employees are likely already using them at work. For example:

  • Finance teams may be using generative AI tools to leverage sales data, as well as third-party market data, to improve forecasting.
  • Software developers may be leveraging such technology to improve the quality of their code.
  • Businesses may have already enabled AI features in commonly used applications to assist in writing emails, taking notes, or creating presentations.

Individuals in your organization may also be developing their own AI applications or training large language models using customer data or information found online. These uses can create substantial benefits for your business, though they also pose risks.

Businesses that do not adopt or update AI policies may miss easy wins, such as the opportunity to use existing business processes to vet third-party AI tools, which could help them stay in compliance. Implementing an AI policy, even as a work in progress, can set the tone for responsible uses of AI that fuel innovation across the business.

  2. Review your privacy disclosures and update them as needed. Take care to ensure your privacy disclosures, such as your public privacy statement or employee privacy policy:
    • Provide sufficient transparency with respect to your collection, storage, and uses of personal information, including any uses of automated decision-making and AI; and
    • Do not misrepresent the extent to which you maintain and protect the privacy, confidentiality, security, or integrity of consumer or employee personal information.
  3. Implement processes to identify, document, assess, and monitor risks of existing and new uses of AI. Firms may want to begin documenting existing uses of AI across their operations, even if such uses have not previously been formally vetted or approved. Such documentation should demonstrate that your organization considered the risks of each use, implemented mitigations as necessary, considered the need for AI-related employee training, and had a plan to monitor and test the use of AI for risks related to accuracy, integrity, and quality. The documentation should also record technical aspects of the AI, such as the inputs, the outputs, and the logic underpinning AI tools and systems. Such documentation can reduce risks under existing consumer protection laws and may facilitate compliance as the laws evolve.
  4. Assess compliance with applicable existing laws and make necessary investments.

For example, because many uses of AI involve personal information, compliance with existing privacy laws, such as the GDPR and the CCPA provisions already in effect, will likely position your firm to adhere to new AI requirements with greater ease.

  5. Review your vendor processes and agreements. Ensure your organization properly vets and reassesses vendors’ practices if it leverages third-party AI services, particularly those (1) with access to consumer or employee data or your confidential information and (2) involved in your organization’s own development of AI systems and tools. Review your vendor agreements to determine whether they contain sufficient obligations to reduce AI risks. For example, depending on the vendor service, you may consider additional privacy and security obligations, transparency and audit rights, and representations and warranties related to AI inputs or outputs.

Heading into 2024, it’s crucial that leaders closely monitor updates to the laws, regulations, and industry standards that will shape the evolution of AI globally. Doing so will help businesses anticipate significant developments and adapt to them successfully.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Schwabe, Williamson & Wyatt PC | Attorney Advertising
