Deloitte poll: AI use expected to increase in risk and compliance efforts

Health Care Compliance Association (HCCA)

ethikos 33, no. 12 (December 2019)

According to an October 28 Deloitte press release, “Nearly half (48.5%) of C-suite and other executives at organizations that use artificial intelligence (AI) expect to increase AI use for risk management and compliance efforts in the year ahead, according to a recent Deloitte poll. Yet, only 21.1% of respondents report that their organizations have an ethical framework in place for AI use within risk management and compliance programs.” Link to the press release: https://prn.to/2pQiOsw

The Deloitte press release included the following questions that leaders may ask:

  • Did we set the right tone at the top for our organization on AI ethics?

  • What organizational standards have we developed for ethical use of AI?

  • Did we conduct an AI ethics gap analysis?

  • Do we have a plan in place to educate our workforce on AI?

  • Did we alert product teams on what to look for in monitoring AI solutions for ethical compliance?

Ethikos solicited and received questions from ethics professionals that may also be considered:

From Colleen Dorsey, JD, Director of Organizational Ethics and Compliance, University of St. Thomas:

With respect to authority regarding the use of AI, consider these questions:

  1. Who has the ultimate say in how to move forward?

  2. How and when will progress be reported to the board?

  3. Are further governance actions necessary to allow the AI/ML product or project to move forward? For example:

    1. Are there policies that need to be created – or updated?

    2. Are Code of Conduct updates necessary?

    3. Should a separate technology or data practices code be created?

    4. Are board subcommittees necessary – or a good idea? Who needs to approve them if they are?

From Frank Bucaro, CSP, CPAE, Thought Leader on Values-Based Leadership Development, Frank C. Bucaro and Associates:

  1. Are the AI decisions in sync with corporate values and mission statements?

  2. Are accountability “checks and balances” built into the development and usage of AI?

  3. Can someone actually make the decision to “pull the plug” if something goes awry?

From Marianne M. Jennings, Emeritus Professor of Legal and Ethical Studies in Business, W.P. Carey School of Business, Arizona State University:

  1. Do you know how the IT people are currently capturing and using information?

  2. What is the overall plan?

  3. Why are we going to use it?

  4. Are there areas where we will never go?

From Carl Oliver, PhD, MBA, Teaching Fellow, California Lutheran University:

  1. I suspect Stakeholder Theory needs to play an important role. Who are all the stakeholders?

  2. What are the stakeholders’ general and specific interests? A danger to avoid would be Situational Ethics – judging an act from its context rather than by "absolute moral standards."

From Patrick Wellens, a Global Compliance Business Partner for one of the divisions of a multinational pharma company:

  1. If AI is used in decision-making, is the algorithm documented and can it be replicated so that ethical bias can be excluded?

  2. What governance standards are in place (e.g., what are the approval levels, which department reviews whether the AI application follows the ethical framework, and what is the data retention period)?

  3. Does the company have a crisis management and communication plan in place in case unethical use of AI occurs?

Patrick added the following cautions:

  • As noted above, if AI is being used for decision-making on, for example, risk management in third-party due diligence, then the algorithms should be documented and available to show to regulators, especially when the risk management methodology and/or the due diligence on third parties did not work.

  • If companies use third parties (e.g., external recruitment company, solution providers of Big Data analytics, solution providers of third-party due diligence software) that apply AI on behalf of the company, the company should ensure that all third parties abide by an ethical code of conduct in the use of AI. The company should have audit clauses in place to monitor whether the ethical code of conduct is being followed.
