SEC Proposes AI Rules for Broker-Dealers and Advisers After Chair’s Warnings

BakerHostetler

Key Takeaways

  • On July 17, 2023, Gary Gensler, the Chair of the U.S. Securities and Exchange Commission (SEC or Commission), made public statements concerning the use and potential risks of artificial intelligence (AI) technologies in the securities industry, and specifically identified five potential risks on a micro and macro basis – bias, conflicts of interest, financial fraud, privacy and intellectual property concerns, and the stability of the markets – for further review and consideration by the Commission.
  • On July 26, 2023, the SEC proposed rules (AI Rule Proposal)[1] that would, among other things, require broker-dealers and investment advisers to eliminate or neutralize the effect of certain conflicts of interest associated with their use of AI and other technologies that optimize for, predict, guide, forecast or direct investment-related behaviors or outcomes.
  • If adopted, the AI Rule Proposal – which spans 243 pages and includes 93 questions seeking public comment within 60 days after the AI Rule Proposal is published in the Federal Register – would foist significant compliance undertakings on all firms, not only firms that are currently using AI technologies.
  • Because the AI Rule Proposal deals with only one of the five potential risk areas – conflicts of interest – it is possible that the SEC will propose additional rules to address the other identified risk areas.

Chair Gensler’s Public Statements Warning of AI’s Impact on Securities Industry

In his speech before the National Press Club, Chair Gensler focused on the tremendous opportunities AI technology presents but cautioned that it could “heighten financial fragility” and thus require regulatory attention.[2] Chair Gensler recognized that “AI is already very much embedded in our capital markets,”[3] including in “call centers, account openings, compliance programs, trading algorithms, sentiment analysis, and more,” and observed that AI is also rapidly changing robo-advisors, brokerage apps, and, more generally, the overall market structure.[4] That is because, according to Chair Gensler, AI is “the most transformative technology of our time.”[5]

Chair Gensler then outlined the challenges and risks that AI accentuates from both a macro and micro perspective. From a macro perspective, Chair Gensler highlighted concerns relating to data privacy, intellectual property, and financial stability.[6]

  • Data Privacy and Intellectual Property. Because AI models are built on vast data sets that are not uniformly gathered, questions may arise concerning who owns the data, who controls the data, and who has rights to the data. Data privacy rights in the personal information included in those data sets, and intellectual property rights in copyrighted and other protected information that may have been scraped from the Internet, have broad implications.[7] While these are important concerns, they likely fall beyond the SEC’s regulatory purview or are already covered by existing regulations, insofar as Regulations S-P and S-ID mandate appropriate safeguards for personal data and controls to protect against identity theft.
  • Financial Stability. Chair Gensler raised the concern that AI might “heighten financial fragility as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator,” which is “exacerbate[d by] the inherent network interconnectedness of the global financial system.” In other words, inaccuracies in a base AI model could have cascading effects that could trigger the next financial crisis because large segments of the industry rely on the same flawed model or erroneous information to make financial decisions.[8] In particular, Chair Gensler warned that generative AI often creates fabricated information (or “hallucinations”).[9] This, in turn, creates the risk that these hallucinations and other inaccurate information will be fed back into the base model.

From a micro perspective, Chair Gensler asserted that the use of AI models to make predictions about users and provide individualized outcomes (or “narrowcasting”) poses risks of bias, deception, and conflicts of interest.[10]

  • Bias. AI algorithms are complex, and the data sets on which they are trained are vast, which often makes AI models unexplainable. Accordingly, it can be difficult to understand how a model arrived at its results. This challenges fairness in outcomes, particularly where the data sets a predictive algorithm relies upon reflect historical biases and “latent features that may inadvertently be proxies for protected characteristics.” According to Chair Gensler, these data analytics challenges are not new, and regulators are already looking into algorithms that utilize or are predicated on data reflecting biases already present in society. The Consumer Financial Protection Bureau (CFPB), for example, has targeted digital redlining, including bias in algorithms and technologies marketed as AI, and is pursuing rulemaking to better protect homebuyers and homeowners from bias in home valuations and the appraisal process.[11]
  • Deception. With the increased adoption of AI comes an increased risk of it being used by bad actors to deceive consumers and the broader financial market. Chair Gensler emphasized that under the federal securities laws, fraud is fraud, and the “SEC is focused on identifying and prosecuting any form of fraud that might threaten investors, capital formation, or the markets more broadly.”[12] This includes the industry’s use of AI.
  • Conflicts of Interest. Where advisers or broker-dealers build AI models that place their own interests before the interests of their clients, they create conflicts of interest. As a result of this concern, Chair Gensler noted that he had requested rule proposals from SEC staff on how to address such potential conflicts of interest.[13] The AI Rule Proposal is the apparent first product of that work.

Proposed Rule Addressing Conflicts of Interest Associated with the Use of Predictive Data Analytics

In the AI Rule Proposal, the SEC recognized that broker-dealers and investment advisers are increasingly using predictive data analytics (PDA) and other emerging technologies that, depending on their configuration and use, may place the firms’ interests ahead of those of their clients (either intentionally or unintentionally) and, as a result, harm investors.[14] Given this risk, the SEC proposed new rules that, if adopted, would create sweeping compliance obligations with far-reaching implications beyond the securities industry’s use of AI technology.

In an attempt to make the proposed rules evergreen, the SEC defined their terms broadly and without limiting principles. For example, the proposed definition of a “covered technology” is so expansively drafted that it would reach common applications and other analytical processes or methods that are not intended to be covered by the proposed rules. Similarly, the proposed definition of “investor interaction” is not limited to a firm’s clients and could be interpreted to extend to interactions with other investors to whom the firm owes no duty of care. Moreover, the proposed definition of “conflicts of interest” would capture any use of a covered technology that takes into consideration an interest of the firm, without regard to the materiality of that interest.

Viewed through this broad lens, the proposed rules would require broker-dealers and investment advisers to eliminate or neutralize the effect of any conflict of interest that places the firm’s interest ahead of the interests of investors by doing the following.

  • Adopt, Implement, and Maintain Written Policies and Procedures. The proposed rules would require firms to document compliance by adopting, implementing, and maintaining written policies and procedures that evaluate the use or potential use of PDA and other covered technologies, identify whether that use involves a conflict of interest placing the firm’s interests ahead of investors’, and, if so, take steps to eliminate or neutralize the effect of that conflict.
  • Include Four Key Components of Compliance Program. While the SEC noted that the process to achieve compliance with the rule is risk-based and not a one-size-fits-all approach,[15] it did prescribe that compliance programs must include written descriptions of the process by which a firm (i) evaluates the use or potential use of a covered technology in any investor interaction, (ii) identifies whether any conflicts of interest exist that place the firm’s interest ahead of investors’, and (iii) determines how to eliminate or neutralize the effect of any such conflicts of interest. Fourth, firms must review these components at least annually to ensure their adequacy. The AI Rule Proposal noted that interactions solely for purposes of providing clerical, ministerial, or general administrative support – like anti-money laundering checks or general communications to open new accounts – are not within the scope of the proposed rules.
  • Maintain Books and Records Documenting Compliance. Under the proposed rules, firms must also maintain records of their evaluations, including (i) a list of all covered technologies used in investor interactions and when each was first implemented and materially modified, and (ii) the date of any testing of a covered technology, as well as any actual or potential conflicts of interest identified in that testing and any resulting modifications or restrictions.

Notably, the proposed rules were not unanimously supported by the Commission, as evidenced by the dissenting statements of Commissioners Hester Peirce and Mark Uyeda. Commissioner Peirce criticized the proposal as unnecessary, hostile to certain types of technology, and at odds with pillars of the SEC’s historical regulatory structure.[16] According to Commissioner Peirce, the proposed rules are unnecessary because conflicts of interest are already addressed by an adviser’s fiduciary duty and a broker-dealer’s obligations pursuant to Regulation Best Interest. And in response to the proposing release’s claim that it took a “technology neutral” approach, Commissioner Peirce stated that the proposed rules are hostile to AI and other emerging technologies because they single out those technologies in a way that makes using them operationally infeasible. She further warned that the proposed rules reject one of the central pillars of the SEC’s regulatory regime – disclosure – by taking the position that no level of disclosure would permit a firm to proceed without eliminating or neutralizing the conflict of interest. Commissioner Uyeda similarly criticized the proposal as unnecessary for the same reasons, while also pointing out how “breathtakingly broad” and vague it would be.[17] Both Commissioners Peirce and Uyeda explained that the proposal “encompass[es] nearly everything” and, by its terms, would sweep spreadsheets, commonly used software, math formulas, and even electronic calculators into its obligations.

Accordingly, it is advisable for any affected firm to carefully review the AI Rule Proposal and provide comments to the SEC either individually or through various industry groups on how to appropriately scope the SEC’s regulatory guidance on this important issue.

* * *


[1] Proposed Rule, Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, File No. S7-12-23 (July 26, 2023), https://www.sec.gov/files/rules/proposed/2023/34-97990.pdf (“Proposing Release”).

[2] Chair Gary Gensler, “Isaac Newton to AI” Remarks before the National Press Club, Sec. & Exchange Comm’n (July 17, 2023), available at https://www.sec.gov/news/speech/gensler-isaac-newton-ai-remarks-07-17-2023.

[3] Jennifer Schonberger, SEC Chair warns of AI’s potential role in future financial crises: YF Exclusive, Yahoo! Finance (July 17, 2023), available at https://finance.yahoo.com/video/sec-chair-warns-ais-potential-201110024.html.

[4] Id.

[5] Chair Gensler, supra note 2.

[6] Id.

[7] Id.

[8] Id.

[9] Schonberger, supra note 3.

[10] Chair Gensler, supra note 2.

[11] U.S. Consumer Fin. Protection Bureau, CFPB and Federal Partners Confirm Automated Systems and Advanced Technology Not an Excuse for Lawbreaking Behavior (Apr. 25, 2023), available at https://www.consumerfinance.gov/about-us/newsroom/cfpb-federal-partners-confirm-automated-systems-advanced-technology-not-an-excuse-for-lawbreaking-behavior/; see also Schonberger, supra note 3.

[12] Id.

[13] Chair Gensler, supra note 2.

[14] Proposing Release at 6.

[15] Proposing Release at 116 n.200.

[16] Public Statement, Commissioner Hester M. Peirce, Through the Looking Glass: Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Adviser Proposal (July 26, 2023), https://www.sec.gov/news/statement/peirce-statement-predictive-data-analytics-072623.

[17] Public Statement, Commissioner Mark T. Uyeda, Statement on the Proposals re: Conflict of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (July 26, 2023), https://www.sec.gov/news/statement/uyeda-statement-predictive-data-analytics-072623.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© BakerHostetler | Attorney Advertising
