Preparing for an Era of Regulated Artificial Intelligence

Troutman Pepper
Published in Law360 on January 25, 2023. © Copyright 2023, Portfolio Media, Inc., publisher of Law360. Reprinted here with permission.

In recent months, there has been an explosion of artificial intelligence tools that have given even technophobes an opportunity to test AI's power from the comfort of their favorite web browser.

From DALL-E's ability to generate digital images from natural language prompts to ChatGPT's ability to answer questions, write blog posts, essays, poetry or even song lyrics, today's AI tools can be used by anyone who can open a web browser.

Behind the scenes of these AI tools, and of the more powerful ones long employed by corporations and government entities, algorithms are hard at work. Although algorithms are a series of objective mathematical instructions, critics have claimed that in some contexts, unless precautions are taken, the data fed to them can lead algorithms to amplify pre-existing subjective biases and worsen socioeconomic disparities.

State attorneys general, in turn, are now setting their sights on this fast-developing technology. Through lawsuits and, most notably, legislation introduced by Washington, D.C., Attorney General Karl Racine, who left office at the end of 2022, state attorneys general are making their presence known as AI technology increasingly substitutes for human decision making.

As AI becomes increasingly responsible for making decisions concerning a multitude of aspects of our lives, some observers have asserted that flaws in these systems can lead to generational consequences. Industry watchers are calling for the systems to be regulated to help rein in their perceived discriminatory tendencies and prevent those tendencies from proliferating further within AI technologies.

Algorithms and the data they are trained on power AI

Algorithms power AI. They are mathematical instructions programmed to solve a problem by directing a computer to perform a series of predefined "if," "and," "or" or "or not" statements, instructions that have been present in computer programming since at least the 1970s. By layering levels of complex algorithms on top of enormous pools of data, AI can answer complex questions and make recommendations that only humans were once thought capable of making.
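To make this concrete, below is a minimal, purely hypothetical sketch in Python of how an eligibility rule might be expressed as the kind of predefined conditional instructions described above. The function name, inputs and thresholds are all invented for illustration.

    # Hypothetical illustration: a loan pre-screen expressed as the kind of
    # "if," "and," "or" and "or not" instructions described above.
    # All thresholds are invented.
    def prescreen(credit_score: int, income: int, has_bankruptcy: bool) -> bool:
        # "if" and "and": both conditions must hold
        if credit_score >= 660 and income >= 40_000:
            # "or not": an exception path
            return not has_bankruptcy
        # "or": an alternate route to approval
        return credit_score >= 720 or income >= 100_000

    print(prescreen(680, 45_000, False))  # True
    print(prescreen(600, 30_000, False))  # False

Modern AI systems do not hand-code such rules; they learn vast numbers of them from data, which is precisely why the data matters so much.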

The data used to train an AI model is just as important as the algorithms on which the model is built, because the data helps teach the model what the right answers are. In addition, many AI models are trained over time with varying levels of human involvement, from supervised learning to entirely unsupervised learning.
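As a rough sketch of those two regimes, the hypothetical Python example below (invented data, using the widely available scikit-learn library) contrasts supervised learning, where a human supplies the "right answers," with unsupervised learning, where the model must find structure on its own.

    # Minimal, hypothetical sketch contrasting the two training regimes.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    X = np.array([[1, 2], [2, 1], [8, 9], [9, 8]], dtype=float)

    # Supervised: a human provides the "right answer" (label) for each row.
    y = np.array([0, 0, 1, 1])
    clf = LogisticRegression().fit(X, y)

    # Unsupervised: no labels; the model infers groupings from the data alone.
    km = KMeans(n_clusters=2, n_init=10).fit(X)

    print(clf.predict([[1.5, 1.5]]))  # prediction learned from human labels
    print(km.labels_)                 # groupings discovered without labels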

Because of the complexity of these models, it is frequently the case that once an AI model is fully formed, humans, even those who designed it, have trouble understanding how variables have been combined and assessed to solve the problem at hand and generate an output. Without adequate oversight, the end result of this complexity is a black box of decision making.

Biased and discriminatory training data can lead to AI models that create biased and discriminatory outcomes. Imagine a company wants to develop an AI tool to weed through job applicants' resumes to quickly whittle down large numbers of resumes to only a handful and avoid the implicit bias human evaluators might bring to the process. Thus, the company trains its AI model based on resumes of employees it hired over the past two decades.

But because the company rarely hired people with ethnic last names until recently, the AI model could show a strong preference for nonethnic last names, regardless of other factors, based on the data with which it was trained. Each time the model recommends a candidate with a nonethnic last name, never having been trained to recommend candidates with ethnic names, its biased and discriminatory nature compounds.

Even though the company had a noble goal of a nonbiased, nondiscriminatory vetting process for resumes, the subjectivity of the underlying data that trained its AI model infiltrated the system and created a discriminatory outcome. The AI ends up reinforcing the very implicit bias it was intended to avoid.
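The toy Python sketch below shows how this can play out. All of the data, feature names and hiring outcomes are invented for illustration: a simple model is fit to biased historical decisions and, as a result, scores two equally experienced candidates differently based solely on a surname-related feature.

    # Toy illustration of the resume scenario above (all data invented).
    # Each historical applicant: [years_of_experience, ethnic_surname_flag];
    # the label records whether the company hired them in the past.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([
        [2, 0], [4, 0], [6, 0], [8, 0],  # no ethnic surname: often hired
        [2, 1], [4, 1], [6, 1], [8, 1],  # ethnic surname: rarely hired
    ], dtype=float)
    y = np.array([0, 1, 1, 1,
                  0, 0, 0, 1])           # biased historical outcomes

    model = LogisticRegression().fit(X, y)

    # Two equally experienced candidates differing only in the surname flag:
    print(model.predict_proba([[6, 0]])[0, 1])  # higher "hire" probability
    print(model.predict_proba([[6, 1]])[0, 1])  # lower "hire" probability

    # The surname coefficient is negative: the model has learned the bias.
    print(model.coef_)

No one programmed the model to discriminate; the skew was inherited entirely from the historical data.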

Discriminatory algorithms are not theoretical

Observers note that algorithms trained by flawed data that produce discriminatory outcomes are not theoretical concerns. The use of potentially problematic algorithms in four particular industries that impact fundamental aspects of human life is driving state attorneys general to target discriminatory algorithms:

  • Employment, where AI may be used to test applicants' abilities or weed out job seekers whose skills and backgrounds as described on their resumes are not matches for the positions for which they have applied;

  • Health care, where hospitals, health care providers and insurers have relied on algorithms with the goal of making better decisions about how to treat patients;

  • Financial services, where companies use algorithms to assess a candidate's creditworthiness using consumer data to make decisions to extend or increase credit to individuals and businesses; and

  • Tenant screening, where U.S. landlords [1] receive tenant screening reports from companies that employ algorithms to determine whether would-be renters should be offered leases.

In each of these instances, critics have raised concerns about the potential for discriminatory outcomes, resulting in increased regulatory attention.

Regulators are taking aim at discriminatory algorithms

Over the past year, then-D.C. Attorney General Karl Racine led the charge to eliminate discriminatory algorithms with his Stop Discrimination by Algorithms Act of 2021. [2]

If passed, the act, as originally drafted in December 2021, [3] would, among other things:

  • Make it illegal for corporations and organizations to use algorithms that make eligibility determinations based on "an individual's or class of individuals' actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, or disability in a manner that segregates, discriminates against, or otherwise makes important life opportunities unavailable to an individual or class of individuals." These life opportunities include access to, approval for or offer of credit, education, employment, housing, a place of public accommodation or insurance.

  • Require companies and organizations to audit their algorithms annually for discriminatory patterns (one common statistical screen for such patterns is sketched after this list), and document how they built their algorithms, how the algorithms make determinations, and all the determinations made by them.

  • Require companies and organizations to disclose to consumers, in plain English, information about their use of algorithms to reach decisions, the personal information they collect, and how their algorithms use that information to reach decisions. In addition, they would be required to provide in-depth explanations about unfavorable decisions and to allow consumers an opportunity to correct inaccurate personal information that could lead to unfavorable decisions.
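The act does not prescribe a particular statistical test for these annual audits. As one hypothetical illustration of what an audit screen might compute, the Python sketch below applies the "four-fifths rule" long used in employment-selection analysis: any group whose selection rate falls below 80% of the most-favored group's rate is flagged for further review. All counts are invented.

    # Hypothetical audit screen based on the four-fifths (80%) rule.
    def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        # outcomes maps group -> (number selected, number of applicants)
        return {g: sel / total for g, (sel, total) in outcomes.items()}

    def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
        rates = selection_rates(outcomes)
        best = max(rates.values())
        # True = passes the screen; False = potential adverse impact to document
        return {g: rate / best >= 0.8 for g, rate in rates.items()}

    audit = four_fifths_check({"group_a": (48, 100), "group_b": (30, 100)})
    print(audit)  # {'group_a': True, 'group_b': False}; 0.30/0.48 is about 0.63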

The act provides the D.C. attorney general with enforcement authority and the ability to levy up to $10,000 in civil penalties per violation. The act also provides a private cause of action for violations, for which courts may award between $100 and $10,000 per violation, or actual damages, whichever is greater.

Several states already have enacted legislation aimed at regulating AI applications, while many others have such legislation pending.

For example, the California Consumer Privacy Act, as amended by the California Privacy Rights Act, directs the California Privacy Protection Agency to issue regulations concerning an individual's access and opt-out rights with respect to businesses' use of automated decision-making technology.

The Virginia Consumer Data Protection Act, Colorado Privacy Act and Connecticut Data Privacy Act also require businesses to provide consumers with the opportunity to opt out of certain automated decision processes that profile consumers in furtherance of decisions impacting financial, lending, housing and insurance determinations.

The CCPA/CPRA and VCDPA became effective Jan. 1, while the CPA and CTDPA take effect on July 1. Additional regulations and rulemaking expected this year will clarify the scope of the enacted legislation.

In the absence of legislation, however, some state attorneys general are already taking action to regulate companies' use of algorithms. For example:

  • In March 2020, then-Vermont Attorney General Thomas Donovan sued Clearview AI [4] over the company's use of facial recognition technology. Donovan alleged Clearview used facial recognition technology to map the faces of individuals, including children, and sold the data to businesses and law enforcement in violation of the Vermont Consumer Protection Act.

  • In August 2022, California Attorney General Rob Bonta sent a letter [5] to hospital CEOs across California, opening an inquiry into their use of potentially biased algorithms.

  • In May 2022, the National Association of Attorneys General announced the creation of the NAAG Center on Cyber and Technology, [6] which will develop resources to support state attorneys general in understanding emerging technologies, including machine learning, artificial intelligence and the potential bias and discrimination that may result.

International regulators are also taking measures to address potential bias resulting from the use of artificial intelligence applications. The European Union is currently developing the EU AI Act, which will carry harsher penalties than the General Data Protection Regulation imposes for privacy violations. [7]

The EU AI Act follows a risk-based approach, prohibiting systems that pose unacceptable risk, creating requirements for high-risk systems and establishing transparency obligations for certain other systems, including non-high-risk systems.

The EU AI Act will apply to all businesses deploying AI systems that impact EU consumers — not just businesses located in the EU. As a result, many see the EU framework as the gold standard.

Countries outside the EU, such as Brazil, already are following the EU's lead, while China and Ireland, among others, are currently developing their own legislation to regulate the use of AI in consumer applications.

Welcome to the era of regulated artificial intelligence

While Racine may have been the first state attorney general to try to combat discriminatory algorithms through legislation, he is unlikely to be the last. Other state attorneys general are likely to use current laws, in addition to proposing new ones, to pursue the companies and organizations that create or use algorithms that they view as causing discriminatory outcomes.

We are still in the early days of state attorneys general regulating AI and algorithms through statehouses and courthouses. Recognizing that state attorneys general are closely scrutinizing the use of algorithms for discriminatory impact, the companies and organizations creating or using AI should focus their compliance, research and development efforts accordingly.

How should companies respond?

In light of the developing regulations aimed at governing the use of AI, not only in the U.S. but around the globe, companies should be thoughtful as they begin to deploy AI technologies and develop AI programs. To maintain competitive advantages and minimize disruption caused by increasing regulation, companies should focus on the fundamental principles driving the development of AI regulations by implementing recognized best practices such as:

  • Conducting ethics assessments to identify discriminatory impacts and privacy implications such as identifiability;

  • Establishing AI internal ethical charters outlining ethical data collection and use requirements and procedures; and

  • Developing contractual requirements and minimum data handling and security controls for vendors and third parties with whom data is shared, in order to pass along such ethical requirements and use restrictions (e.g., preventing discrimination, and restricting targeted advertising and location tracking).

Businesses should start implementing these practices as soon as possible to avoid potentially costly business practice changes, such as having to delete data in their data lakes, restart data collection or retrain algorithms if data collection and use are not conducted in accordance with the new requirements.


[1] https://www.nytimes.com/2020/05/28/business/renters-background-checks.html.

[2] https://oag.dc.gov/release/ag-racine-introduces-legislation-stop.

[3] https://oag.dc.gov/sites/default/files/2021-12/DC-Bill-SDAA-FINAL-to-file-.pdf.

[4] https://ago.vermont.gov/blog/2020/03/10/attorney-general-donovan-sues-clearview-ai-for-violations-of-consumer-protection-act-and-data-broker-law/.

[5] https://oag.ca.gov/system/files/attachments/press-docs/8-31-22%20HRA%20Letter.pdf.

[6] https://www.naag.org/press-releases/naag-announces-formation-of-center-on-cyber-and-technology/.

[7] https://www.troutman.com/insights/the-eu-is-throwing-stones-in-the-data-lake-by-regulating-ai-what-global-companies-need-to-do-now-to-prepare.html.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Troutman Pepper | Attorney Advertising
