OCC Semiannual Risk Perspective Identifies AI as “Emerging Risk”

Robinson+Cole Data Privacy + Security Insider

The Office of the Comptroller of the Currency (OCC) issues a semiannual risk perspective report that “addresses key issues facing banks, focusing on those that pose threats to the safety and soundness of banks and their compliance with applicable laws and regulations.” The most recent report “presents data in five main areas: the operating environment, bank performance, special topics in emerging risks, trends in key risks, and supervisory actions.”

One of the special topics in emerging risks is artificial intelligence (AI). Although the OCC acknowledges the potential benefit of using AI in the banking industry, it also acknowledges the risks associated with its use, particularly generative AI tools.

The OCC states: “Consistent with existing supervisory guidance, it is important that banks manage AI use in a safe, sound, and fair manner, commensurate with the materiality and complexity of the particular risk of the activity or business process(es) supported by AI usage. It is important for banks to identify, measure, monitor, and control risks arising from AI use as they would for the use of any other technology.” Although this general statement is a no-brainer, banks need better guidance on how to deal with the risks associated with AI. Telling the banking industry that the OCC is “monitoring” the use of AI is not particularly helpful.

As a former regulator, I believe it would be helpful if regulators provided solid guidance to regulated industries on how the use of AI will be regulated. The risks associated with AI have been documented and are known. We are already behind in mitigating those risks. Regulators must take an active role in shaping appropriate uses and mitigating the risks posed by AI, rather than waiting until bad things happen to consumers. Regulations are always behind reality, and this is no exception.

One risk that is obvious and concerning to me is the use of voice recognition technology by banks and financial institutions to authenticate customers. With the astonishingly accurate voice imitations produced by AI-generated tools, threat actors are, and will continue to be, using deepfakes against financial institutions to perpetrate fraud. Why don’t we just start there? Let’s figure out how financial institutions can identify customers without using Social Security numbers or voice recognition.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Robinson+Cole Data Privacy + Security Insider | Attorney Advertising
