Artificial intelligence and the rise of the regulators

  • The U.S. Department of Justice announced that it would begin to “assess a company’s ability to manage AI-related risks as part of its overall compliance efforts,” according to Deputy Attorney General Lisa Monaco. At the American Bar Association’s annual white collar crime conference, Monaco advised that “compliance officers should take note” and detailed the Department’s new “Justice AI” initiative. She also cautioned companies that “[f]raud using AI is still fraud.” These remarks followed the Department’s recent groundbreaking indictment of a Google engineer for allegedly stealing AI secrets from the tech giant and transferring them to foreign entities.
  • The California Privacy Protection Agency (CPPA) released draft regulations on “Automated Decisionmaking Technology.” The revised regulations include several noteworthy updates to key definitions, as well as the addition of new disclosures detailing how the CPPA expects businesses to comply with notice requirements. Formal adoption and rulemaking based on these draft regulations will continue throughout the year.
  • And after months of effort, members of the European Parliament finally pushed the EU Artificial Intelligence Act over the finish line this week. The Act imposes obligations on businesses according to each AI system’s risk classification and adds data governance and human oversight requirements. It is the most comprehensive AI framework to date and is certain to have a global impact.

Actionable next steps for businesses

As you can see, regulators at every level are beginning to seek a piece of the AI pie. So, how should your company prepare to protect its intellectual property and technology from risk and to withstand regulatory scrutiny?

An initial, simple step for most organizations is to expand an existing acceptable use policy (AUP). Organizations with more mature risk management and technology governance structures, especially companies evaluating early deployment or use of AI, can consider developing a responsible use policy (RUP) that complements existing policies while addressing the novel risks of emerging technologies.

As companies draft revisions to an existing AUP or craft a new RUP, we recommend assessment of the following considerations and potential provisions:

  1. Principles, Purpose, and Definitions: The foundation of any responsible use policy for AI and emerging technologies should be a clear articulation of the principles and purpose that guide the development and deployment of those technologies within the organization. These principles should align with the company’s values, ethics, and commitment to responsible innovation. To ensure clarity and consistency, the policy should also provide precise definitions of the AI and emerging technologies it covers. This establishes a common understanding among all stakeholders and prevents ambiguity in interpretation and implementation.
  2. Use Cases, Harms, and Testing: Identifying potential use cases and their associated risks or harms is crucial for developing effective risk management strategies. Companies should conduct thorough assessments to anticipate and mitigate potential negative impacts, such as bias, discrimination, privacy violations, or unintended consequences. Rigorous testing and validation of AI systems before deployment is essential to identify and rectify potential issues or biases (a minimal illustration of one such check appears after this list).
  3. Governance and Risk Management: Robust governance frameworks and risk management processes are essential for overseeing the development, deployment, and monitoring of AI systems. This includes establishing clear roles and responsibilities, reporting lines, and accountability mechanisms to ensure proper oversight and control. As companies increasingly rely on external partners for AI development and deployment, managing these relationships becomes critical. The policy should also address contractual obligations, data sharing agreements, and compliance requirements to ensure that third parties adhere to the same principles and standards of responsible AI use.
  4. Selection and Utilization Processes: Companies should establish well-defined criteria and processes for selecting and utilizing AI technologies, vendors, and partners. Factors such as transparency, accountability, and ethical considerations should be given due weight in both selection and utilization processes to ensure alignment with the company’s principles and values. Responsible use of AI requires clear guidelines for data handling and human oversight.
  5. Cybersecurity and Privacy: Finally, the policy should outline best practices for protecting company information and stakeholder privacy throughout the data lifecycle, including collection, storage, transfer, and deletion, as well as mechanisms for ensuring meaningful human control and intervention when necessary to protect privacy and property. Responsible use of emerging technologies demands strict administrative, physical, and technical safeguards to protect privacy and proprietary interests.
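To make item 2 concrete, below is a minimal sketch, in Python, of the kind of pre-deployment fairness check a testing program might include. The group labels, sample outcomes, and the 0.8 threshold (the common “four-fifths” heuristic) are illustrative assumptions only, not standards endorsed by this update; an actual testing regime should be designed with counsel and technical experts.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the favorable-outcome rate for each group.
    decisions: iterable of (group, approved) pairs, where approved is a bool."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below threshold times the
    highest group's rate (the "four-fifths" heuristic; configurable)."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Illustrative, fabricated audit log of (group, decision) pairs.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 50 + [("B", False)] * 50)
rates = selection_rates(sample)
print(rates)                          # {'A': 0.8, 'B': 0.5}
print(disparate_impact_flags(rates))  # {'A': False, 'B': True}

A check like this is only one narrow component of rigorous testing and validation; it complements, rather than replaces, the documentation, human oversight, and governance processes described in the other items.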

Bottom line: As both AI risk and AI regulation rise, now is the time for business leaders to weigh these considerations and undertake further assessment.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© McAfee & Taft | Attorney Advertising

Written by:

McAfee & Taft