AI Washing: SEC Enforcement Actions Underscore the Need for Companies to Stick to the Facts on Artificial Intelligence

Orrick, Herrington & Sutcliffe LLP
The SEC settled charges last week against two investment advisers for making false and misleading statements about their purported use of artificial intelligence (AI), including claims that AI was used to inform investment decisions.  The charges come on the heels of last month’s SEC enforcement action against an individual and his company for making fraudulent claims about the purported use of AI in the offer and sale of certain securities. 

For months, SEC leadership has offered repeated warnings about “AI washing” – falsely claiming the use of AI or machine learning models or misrepresenting their application.  Given the growth in AI tools and applications, and growing claims about companies’ AI capabilities, the SEC and other regulators have expressed increasing concerns about businesses misrepresenting or exaggerating their use of AI.

The SEC’s actions demonstrate that the Commission’s warnings about AI washing were not lip service.  Companies touting their AI use or capabilities must have a reasonable basis for those claims and ensure that any statements about their use of AI are not false or misleading.  As SEC Enforcement Director Gurbir S. Grewal noted, companies must watch for “misstatements” as AI-related claims “may be material to individuals’ investing decisions.”

The SEC Enforcement Actions

The SEC brought the March 18 actions against Delphia (USA) Inc. and Global Predictions Inc. under Section 206 of the Investment Advisers Act of 1940, which generally makes it unlawful for an investment adviser to engage in fraudulent, deceptive or manipulative conduct.  In both cases, the SEC found that the investment adviser had neither developed nor implemented the AI capabilities it repeatedly advertised.

With respect to Delphia, the SEC alleged the company made numerous false and misleading statements in regulatory filings, advertisements and press releases and on its website and social media relating to its purported use of AI to analyze its retail clients’ data in the company’s investment process.  For example, Delphia claimed that it “put collective data to work to make [its] artificial intelligence smarter so it can predict which companies and trends are about to make it big and invest in them before everyone else,” and that such data made investment decisions “more robust and accurate.”

Delphia also claimed it used “machine learning to analyze the collective data shared by its members to make intelligent investment decisions.”  Despite having made different versions of these claims since 2019, Delphia admitted in a July 2021 SEC examination that it had not made such use of clients’ data and had not created an algorithm to use such data.  After telling the SEC it would correct the relevant statements, however, Delphia continued to make false and misleading statements in advertisements regarding the use of client data.  The SEC ordered it to pay a civil penalty of $225,000.

The SEC levied similar allegations against Global, ordering it to pay a $175,000 civil penalty.  For example, Global had falsely claimed on its website that its technology incorporated “[e]xpert AI-driven forecasts,” and that it was the “first regulated AI financial advisor[.]” The SEC cited Global for a variety of other false and misleading statements and marketing rule violations, including relating to tax-loss harvesting services, the value of assets on the platform and the relative performance of its models.

In addition to these two cases, on February 2, 2024, Brian Sewell and his company, Rockwell Capital Management, agreed to settle fraud charges with the SEC under Section 17(a) of the Securities Act of 1933 and Section 10(b) of the Securities Exchange Act of 1934 in connection with a hedge fund that was never launched.  Among other misrepresentations, Sewell falsely claimed that his investment strategies would be guided by and benefit from predictive intelligence developed with the help of “machine algorithms,” “artificial intelligence” and a “machine learning model” – each of which, like the fund, never existed.

What Companies Need to Know

These three cases may prove to be the opening salvo in what SEC Chair Gary Gensler dubbed the Commission’s “war” on AI fraud.  With the SEC’s proposed rules on the use of AI in finance stalled in the Senate, the SEC appears determined to regulate the use of AI via enforcement. 

Although these cases were about fraud or misrepresentations, the SEC may attempt to extend the AI washing theory to less clear-cut scenarios.  In the aforementioned cases, the advisers promoted AI, AI capabilities or AI-enhanced investing that didn’t exist.  What remains to be seen is how the SEC will handle cases where a company actually uses an AI tool and touts its abilities, but the AI is not as effective as the statements describe or is not used in the advertised manner.  Given the SEC’s public statements, the Commission may assert those scenarios constitute AI washing in violation of securities laws.

Companies should be careful with AI-related disclosures and statements.  In light of the SEC’s recent enforcement actions and public statements, companies should approach any disclosures, advertisements or other public statements regarding their use of AI with the same care they apply to disclosures of other material risks, and should ensure those statements are consistent across all communications.

Issuers and advisers must ensure they have a reasonable basis for the claims they make about their AI use and the specific risks they face, and, importantly, must be able to defend those claims.  Disclosures about AI should be tailored to the risks a company actually faces, including operational, legal and competitive risks, rather than boilerplate or uniform language.  Companies should also be able to back up their claims with supporting material.

AI washing may also open the door to breach-of-fiduciary-duty claims against companies’ boards and officers that fail to oversee or monitor the implementation of AI, as well as other causes of action, such as private actions under Section 10(b) of the Exchange Act.  Directors and officers should therefore keep AI representations at the forefront of their minds and monitor the development and implementation of the technology.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Orrick, Herrington & Sutcliffe LLP | Attorney Advertising
