Creating an AI governance function: Part 2

Society of Corporate Compliance and Ethics (SCCE)
Mark Diamond*

CEP Magazine (April 2024)

This is Part 2 of a two-part series. Part 1 addressed the risks and restrictions organizations face in deploying artificial intelligence (AI) and the key elements of an AI strategy.[1] This part details how to develop an AI governance function.

Steps in creating an AI governance function

The breadth of AI governance requirements can be overwhelming. Instead of addressing everything at once, break program development into a series of steps (see Figure 1).

Figure 1: AI governance function overview

AI governance policies

The next step is developing and updating your governance policies. An AI governance policy sets out the organization’s approach to compliant, transparent, and ethical use of AI. It details how AI should be used, safeguards employees, and supports compliance with regulatory requirements. It is your overall “guiding light,” demonstrating to others that you are using AI responsibly.

AI governance process development

Once the policies are in place, the next step is to develop governance processes. These processes will be needed both during initial system development and throughout ongoing use.

AI governance process execution and ongoing audit

The next step is execution. Once governance processes are developed, they need to be applied throughout system development and then on a scheduled, recurring basis after launch. Changes in regulatory requirements should drive updates to policies and possibly a review of system design. The provenance of input data should continue to be established, and any issues detected should be resolved. Likewise, companies need to be vigilant that supplementary data inputs do not contain sensitive information.
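Parts of this ongoing audit can be automated. The following is a minimal sketch of such a check, not a prescribed implementation: the record layout and field names ("source", "license", "ssn", and so on) are illustrative assumptions, not something the article specifies.

```python
# Hypothetical sketch: audit one data record for (a) missing provenance
# metadata and (b) sensitive fields that should not appear in inputs.
# All field names here are assumptions for illustration.

SENSITIVE_FIELDS = {"ssn", "date_of_birth", "home_address"}

def audit_record(record: dict) -> list[str]:
    """Return a list of governance issues found in a single data record."""
    issues = []
    provenance = record.get("provenance", {})
    # Provenance check: every input should identify its source and license.
    if not provenance.get("source"):
        issues.append("missing source")
    if not provenance.get("license"):
        issues.append("missing license")
    # Sensitive-data check: supplementary inputs should not carry PII.
    leaked = SENSITIVE_FIELDS & set(record.get("fields", {}))
    for field in sorted(leaked):
        issues.append(f"sensitive field present: {field}")
    return issues

# Example: a record with a source but no license, and a sensitive field.
record = {"provenance": {"source": "vendor-feed"},
          "fields": {"name": "A", "ssn": "..."}}
print(audit_record(record))  # ['missing license', 'sensitive field present: ssn']
```

A scheduled job running checks like this over each data feed, with detected issues routed to the governance team for resolution, is one way to make the "scheduled, repeated, ongoing" audit concrete.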

Creating a compliant and defensible program

How can an organization ensure it is using AI in a compliant way? In short, trust your processes. Develop a comprehensive approach with the appropriate governance processes. Regulators realize that the rapid growth of AI makes regulating the technology itself very difficult. Therefore, most regulators are focusing on how organizations are using AI. Companies need to demonstrate that they are trying to follow the rules, even if the rules themselves are general or not prescriptive.

Perhaps the biggest risk with AI governance is waiting for clear and well-prescribed regulatory and legal rules. The regulatory and legal uncertainty surrounding AI is not going away soon, and the wait for complete clarity could be a long one. Build out your policies and processes despite the gray areas. Be consistent in your execution. A well-executed, well-intentioned, but slightly imperfect approach is more compliant and defensible than waiting. Don’t let perfect be the enemy of good.

Final thoughts

AI and its required governance present a challenge for organizations: a new, complex technology facing a somewhat chaotic legal and regulatory environment. This challenge represents a leadership opportunity for in-house compliance professionals to change the tug-of-war conversation of “we have to do this” versus “this is what can be done” into “let’s work together and focus on what we can do.” Compliance professionals to the rescue.

* Mark Diamond is the CEO of Contoural Inc. in Los Altos, California, USA.


1 Mark Diamond, “Creating an AI governance function: Part 1,” CEP Magazine, March 2024, https://compliancecosmos.org/creating-ai-governance-function-part-1.
