Virginia Looks to Regulate Artificial Intelligence

Fox Rothschild LLP

Virginia is considering legislation that would regulate artificial intelligence. Here are some key points to know.

Definitions

  • AI is defined as “technology that uses data to train statistical models for the purpose of enabling a computer system or service to autonomously perform any task, including visual perception, language processing, and speech recognition, that is normally associated with human intelligence or perception.”
  • “Consequential decision” is used in place of “legal or similarly significant effects” (as in the GDPR and other state laws) and has a similar, but narrower, definition requiring a “material” effect on access to credit, criminal justice, education, employment, health care, housing or insurance.
  • “High-risk artificial intelligence system” means any artificial intelligence system that is specifically intended to autonomously make, or be a controlling factor in making, a consequential decision. A system or service is not a “high-risk artificial intelligence system” if it is intended to (i) perform a narrow procedural task, (ii) improve the result of a previously completed human activity, (iii) detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review, or (iv) perform a preparatory task to an assessment relevant to a consequential decision.

Requirements for Developers

Developers of high-risk AI systems can’t sell, lease, give or otherwise provide such a system to a deployer without:

  • A statement of the intended uses.
  • Documentation setting forth the known limitations of the system, its purpose, the intended benefits, how the system was evaluated, measures taken to mitigate discrimination, and how the system can be used for making consequential decisions.
  • The technical capability for the deployer to access all information and documentation needed to conduct an impact assessment.

Requirements for Developers of Generative AI (Starting 10/1/24)

Developers can’t sell a GenAI system to consumers or anyone doing business in Virginia unless the system:

  • Reduces and mitigates the reasonably foreseeable risks.
  • Exclusively incorporates and processes datasets that are subject to data governance measures appropriate for generative artificial intelligence systems, including measures to examine the suitability of data sources for possible biases and appropriate mitigation.
  • Achieves, throughout the life cycle of such generative artificial intelligence system, appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity, as assessed through appropriate methods, including model evaluation involving independent experts, documented analysis, and extensive testing, during conceptualization, design and development of such generative artificial intelligence system.

Developers also can’t sell a GenAI system unless they have conducted an impact assessment that assesses:

  • Intended purpose
  • The extent to which AI will be used
  • Extent to which prior use of such AI has harmed or adversely impacted individuals, or has given rise to concerns of such harm
  • Potential extent of adverse impact or harm
  • Extent to which the individuals potentially impacted are dependent on the outcome (e.g., because they cannot opt out)
  • Extent to which the individuals who may be harmed belong to a vulnerable population
  • Extent to which the outcomes produced are reversible.

Developers also can’t provide a GenAI system to a search engine operator or social media platform operator without providing the technical capability that the operator reasonably requires to perform its duties.

Requirements for Deployers:

Before using high-risk AI systems for consequential decisions, deployers must:

  • Avoid risk of algorithmic discrimination
  • Implement a risk management policy/program that is (1) at least as strict as the AI RMF (NIST’s AI Risk Management Framework) or another nationally recognized AI risk management framework and (2) reasonable for the size, complexity, nature, scope and sensitivity of the data
  • Complete an impact assessment before deploying and no later than 90 days after each update.

The impact assessment needs to include:

  • Purpose and risk of algorithmic discrimination
  • (if applicable) The extent to which the system was used in a manner consistent with, or different from, the developer’s intended use
  • Description of the data processed as inputs and the outputs
  • (if applicable) The data used to retrain the system
  • Transparency measures taken
  • (if applicable) Any post-deployment monitoring of performance and user safeguards (including oversight processes).
Other Key Points

  • There are some carveouts and exceptions – e.g., for law enforcement, research and life-saving uses – but the burden of proof is on the party trying to rely on the exemption.
  • Enforcement is by the Attorney General, with statutory fines.
  • The effective date is 10/1/24 for some developer obligations and 7/1/26 for deployers.
