You Don’t Need New Regulation to Have AI Enforcement Risk

Compliance officers began 2023 dazzled by the potential of artificial intelligence and braced for new regulations to govern how corporations use AI.

Then a funny thing happened: governments didn’t regulate AI anywhere near as much as people expected – and enforcement against improper uses of AI started happening anyway.

That’s an important point to remember as we all dive into 2024. Yes, this year we are likely to see more (and more specific) regulation of AI; but compliance officers and internal auditors should be aware that existing regulations already pose a serious enforcement risk for companies incorporating AI into their business processes today.

The best example of that comes from the U.S. Federal Trade Commission, which in December barred a large retail chain from using AI-driven facial recognition technology to intercept potential shoplifters. The case is a reminder that even as we embrace cutting-edge technology, basic principles of corporate compliance and risk management still apply – and that compliance officers should be involved in AI adoption to ensure those basic principles aren’t ignored.

The basics are as follows. The company wanted to identify people entering its stores who might be potential shoplifters. So it built a photo library of known shoplifters, either by uploading photos employees had snapped when a shoplifter was apprehended or by acquiring photos from law enforcement databases.

Then the company used facial-recognition technology to compare customers entering its stores against that database of known shoplifters. When the AI found a match, it sent an automated alert to the phones of store employees with further instructions: monitor the customer, escort them off the premises, or even (in extreme cases) call police immediately.
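
To make that workflow concrete, here is a minimal sketch of the match-and-alert logic described above. It is illustrative only: the embedding-based matching, the function names, and the similarity threshold are assumptions, not details from the FTC’s complaint.

```python
from dataclasses import dataclass

# Illustrative only: names, threshold, and the embedding-based matching
# approach are assumptions, not details drawn from the FTC's complaint.

@dataclass
class EnrolledPhoto:
    person_id: str
    embedding: list  # face embedding computed when the photo was enrolled

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

MATCH_THRESHOLD = 0.85  # hypothetical cutoff; real deployments tune and re-test this

def check_entrant(entrant_embedding, watchlist):
    """Compare a store entrant's face embedding against the watchlist.

    Returns the matched person_id, or None if no enrolled photo clears
    the similarity threshold (None means no alert is sent).
    """
    best = max(watchlist,
               key=lambda p: cosine_similarity(entrant_embedding, p.embedding),
               default=None)
    if best and cosine_similarity(entrant_embedding, best.embedding) >= MATCH_THRESHOLD:
        return best.person_id  # would trigger the automated alert to store phones
    return None
```

Notice how much rides on that one threshold: set it too low and legitimate customers are flagged; set it too high and the system misses its targets. That tradeoff is exactly what the FTC found the company had failed to manage.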

So, what can go wrong in that use case? Lots, according to the Federal Trade Commission.

Internal controls, training and more

The FTC flagged numerous shortcomings in how the retailer used its facial recognition technology. Among them:

  • Weak technical controls. Remember how the company was building a database of photos to compare in-store customers against? The system lacked sufficiently strong technical controls to verify that those photos were high-quality images – and image quality is what assures accuracy in facial recognition.
  • Insufficient testing of the technology. The company didn’t do enough testing to see how often its system generated false positives (identifying a customer as a known shoplifter by mistake), or whether the rate of false positives changed over time – the kind of ongoing measurement sketched in the example after this list.
  • Poor employee training. Employees received only a few hours’ training on how to use the facial recognition system, and the training covered only the mechanics of operating it – not the risk of false positives or the potential for AI bias against minority groups.
  • Poor procedures for using the technology. The company also lacked procedures to help employees use the facial recognition system in a risk-aware manner – such as requiring them to check a suspected shoplifter’s ID before asking that person to leave the store.
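
Several of these gaps correspond to controls a team could actually implement and monitor. Below is a minimal sketch, assuming a hypothetical log of alerts adjudicated after the fact, of how the false-positive rate might be tracked over time – the ongoing testing the FTC said was missing.

```python
from collections import defaultdict

# Hypothetical log of alerts, each adjudicated after the fact by a human
# reviewer: (month, was_false_positive). Real data would come from case records.
alerts = [
    ("2023-01", True), ("2023-01", False), ("2023-02", True),
    ("2023-02", True), ("2023-02", False), ("2023-03", False),
]

def false_positive_rate_by_month(log):
    """Compute the share of alerts later judged false positives, per month."""
    totals = defaultdict(int)
    false_pos = defaultdict(int)
    for month, was_fp in log:
        totals[month] += 1
        false_pos[month] += int(was_fp)
    return {m: false_pos[m] / totals[m] for m in sorted(totals)}

FP_RATE_LIMIT = 0.05  # hypothetical risk-appetite threshold, set by governance

for month, rate in false_positive_rate_by_month(alerts).items():
    flag = "  <-- exceeds limit; escalate to compliance" if rate > FP_RATE_LIMIT else ""
    print(f"{month}: {rate:.0%} false positives{flag}")
```

Even a simple trend report like this gives compliance a number to challenge. The point is the discipline of measuring, not the particular threshold.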

As a result of those shortcomings, the FTC said, the company’s employees mishandled false positives on a regular basis. Employees would confront legitimate customers, erroneously accuse them of shoplifting, and “potentially expose consumers to risks including the restriction of consumers’ ability to make needed purchases, severe emotional distress, reputational harm, or even wrongful arrest.”

That became the legal basis for the FTC to act, and the company settled, accepting a five-year ban on using the facial recognition technology.

The real culprit: poor risk management

What’s most interesting here is that while artificial intelligence was at the heart of the company’s facial recognition technology, the AI itself didn’t create any new compliance risks for the company. Those risks – weak technical controls, insufficient testing, inadequate employee training, and the like – could plague the rollout of any new technology.

The true risk – the one that exists right now, even before governments develop new rules for AI – is in how your company oversees the adoption of new technology.

Compliance and internal audit teams must both play a role here. For example, internal audit could look at your proposed AI innovation and ask, “Where is the data the AI will use coming from? How can we know that data is accurate and valid?” Compliance teams could think through likely enforcement risks (privacy or discrimination, for example) and game out how your AI-driven app might trigger such violations.
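
To make that internal-audit question concrete, here is a minimal sketch of the kind of automated provenance checks an audit team might ask for before data feeds an AI system. The record fields, approved sources, and rules are all hypothetical.

```python
# Hypothetical intake checks for data feeding an AI system. Field names,
# approved sources, and rules are illustrative, not a prescribed audit standard.

REQUIRED_FIELDS = {"source", "collected_on", "consent_basis"}
APPROVED_SOURCES = {"internal_incident_report", "law_enforcement_db"}

def audit_record(record):
    """Return a list of audit findings for one data record (empty list = passes)."""
    findings = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        findings.append(f"missing provenance fields: {sorted(missing)}")
    if record.get("source") not in APPROVED_SOURCES:
        findings.append(f"unapproved data source: {record.get('source')!r}")
    if not record.get("consent_basis"):
        findings.append("no documented legal basis for using this record")
    return findings

# Example: a record with no documented source should be flagged for review.
print(audit_record({"collected_on": "2023-06-01", "consent_basis": "store policy"}))
```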

For any of this to succeed, however, compliance and internal audit teams must be involved in discussions about how your enterprise will adopt AI. It’s when operating units experiment with AI without your input that compliance risks take root and metastasize into severe threats.

That should not be news to compliance officers. For years we’ve talked about anti-corruption risks and the need for compliance officers to be included in strategic planning for international expansion, so you can analyze those plans for potential FCPA risks.

That same dynamic is true for artificial intelligence: you need to be involved in senior management’s discussions of how it wants to embrace AI, so that you (and your allies in internal audit) can analyze how those ideas might trigger compliance risks.

The AI is almost incidental. Thoughtful governance and oversight, keeping ethical and compliance issues front of mind – that’s what will keep your company on the right side of enforcement risk.
