Leveraging Ethical AI for Effective Compliance

American Conference Institute (ACI)

Nearly every sector, company, and business function can benefit from AI’s use cases. For legal and compliance teams, for example, imagine the amount of time and resources saved by having a machine quickly scour through and analyze oceans of data—legal and regulatory documents, transactional data, expense reports, social media communications, and more.

Through leveraging AI, in-house counsel and chief compliance officers can more quickly and efficiently spot anomalies or trends in data that may point to fraud or other misconduct—even identifying issues that might have escaped human analysis. In this way, AI theoretically has the potential to mitigate, rather than contribute to, legal and compliance regulatory risk.
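To make this concrete, here is a minimal sketch of how a compliance team might surface unusual expense reports with an off-the-shelf anomaly detector. It is an illustration only, not a recommended configuration: the file name, column names, and contamination rate below are hypothetical placeholders, and anything flagged would still need human review.

```python
# Illustrative sketch only: flagging unusual expense reports with an
# unsupervised anomaly detector. The file, columns, and contamination
# rate are hypothetical placeholders, not a recommended setup.
import pandas as pd
from sklearn.ensemble import IsolationForest

expenses = pd.read_csv("expense_reports.csv")  # hypothetical data extract
features = expenses[["amount", "days_to_approval", "vendor_frequency"]]

model = IsolationForest(contamination=0.01, random_state=0)
expenses["anomaly"] = model.fit_predict(features)  # -1 marks an outlier

# Route flagged reports to a human reviewer rather than acting automatically.
flagged = expenses[expenses["anomaly"] == -1]
print(flagged.head())
```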

The challenge, of course, is that AI is evolving faster than the governance and regulatory controls needed to keep its ethical and legal use in check. This is concerning because information produced with algorithms is far from perfect: it can be fraught with inaccuracies, perpetuate bias and discrimination, infringe consumer data privacy rights, or cause other harms.

This is where ethical AI, or responsible AI, plays a critical role. While there is no single, concrete definition of these terms, IBM succinctly explains AI ethics in this way: “Ethics is a set of moral principles which help us discern between right and wrong. AI ethics is a set of guidelines that advise on the design and outcomes of artificial intelligence.”

Regulatory tensions

Regulators are increasingly taking notice of how companies use AI as well, especially where its use can perpetuate unlawful discrimination and bias. Recently, four federal agencies—the Department of Justice (DoJ), the Federal Trade Commission, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission (EEOC)—issued a joint statement warning the private and public sectors that they will vigorously enforce their respective laws and regulations to promote responsible AI innovation.

The joint statement came one year after the EEOC and DoJ each released guidance documents describing how AI used to make employment decisions can perpetuate disability discrimination in violation of the Americans with Disabilities Act (ADA). The EEOC guidance helpfully provided recommended measures employers can take to ensure compliance with the ADA when using algorithmic decision-making tools.

For legal and compliance teams, aligning responsible AI with existing laws and regulations is a complex and subjective exercise. Marian Croak, vice president of Responsible AI and Human Centered Technologies at Google, explained the challenges in this way: “Most institutions have only developed principles—and they’re very high-level, abstract principles—in the last five years. There’s a lot of dissension, a lot of conflict in terms of trying to standardize on normative definitions of these principles. Whose definition of fairness, or safety, are we going to use?”

Responsible AI measures

While ethical AI best practices continue to evolve, below are some guiding principles to consider, gathered from the collective insights of leading companies in the AI space.

Appoint a dedicated AI leader. Many companies today are opting to hire a dedicated AI ethics officer. While the title and responsibilities of this role vary greatly from company to company, the idea is to have someone lead the company’s responsible AI journey. Microsoft’s Chief Responsible AI Officer Natasha Crampton, for example, leads the company’s Office of Responsible AI, tasked with “building and coordinating the governance structure for the company,” Crampton wrote in a blog post.

Create a senior-level, cross-functional AI working group. In addition to having a dedicated AI ethicist, many leading companies are creating AI working groups with responsibility for driving AI efforts across the company. Ideally, this working group is championed by senior leaders and consists of those who collectively bring to the table both technical skillsets and business knowledge.

Microsoft’s Responsible AI Council is one such exemplary model. Co-chaired by Microsoft President Brad Smith and Chief Technology Officer Kevin Scott, the Responsible AI Council “brings together representatives of our core research, policy, and engineering teams dedicated to responsible AI, including the Aether committee and its Office of Responsible AI, as well as senior business partners who are accountable for implementation,” Crampton wrote.

Establish a set of guiding AI principles. Many leading companies have in place their own set of responsible AI principles from which other companies could draw inspiration. A few good examples include Microsoft’s “Responsible AI Standard,” IBM’s “Principles for Trust and Transparency,” Salesforce’s “Trusted AI” principles, and Google’s AI principles.

Promote inclusivity in AI practices. Leading companies recognize the importance and value of ensuring AI practices are intentionally inclusive and diverse by respecting and weighing how AI impacts society at large. A helpful resource in this respect is the Partnership on AI, which recently established the “Global Task Force for Inclusive AI,” a body of leading practitioners and researchers across academia, civil society, industry, and policy “focused on establishing a framework for ethical and inclusive public engagement practices in the field of AI.”

Embed responsible AI into the fabric of the company. In order to promote inclusivity in AI practices outwardly, it’s important to promote inclusivity in AI practices internally by partnering with multiple stakeholder groups. For example, Salesforce’s Ethical Use Advisory Council “consists of a diverse group of frontline and executive employees, academics, industry experts, and society leaders.” According to Salesforce, the advisory council “ensures that we address the impacts of modern technology collaboratively, consider a wide set of perspectives, and mitigate risk while staying aligned to our commitments.”

Microsoft operationalizes AI through a centralized effort, led by its Office of Responsible AI, Aether committee, and its Responsible AI Strategy in Engineering. “We learned that we needed to create a governance model that was inclusive and encouraged engineers, researchers, and policy practitioners to work shoulder-to-shoulder to uphold our AI principles,” Crampton said. “A single team or a single discipline tasked with responsible or ethical AI was not going to meet our objectives.”

Develop AI for the benefit of society. “What I believe very, very strongly is that any technology that we’re designing should have a positive impact on society,” Croak said. Google, for example, has publicly committed not to design or deploy AI technologies that cause or are likely to cause overall harm; directly facilitate injury to people; gather or use information for surveillance violating internationally accepted norms; or whose purpose contravenes widely accepted principles of international law and human rights.

Design AI systems to be transparent and explainable. It is understandably difficult to trust the results of AI models when transparency is lacking. Designing AI systems to be transparent and explainable helps legal and compliance teams, as well as the business, both gain and foster trust that their AI models are accurate and reliable. IBM has publicly advocated that technology companies “need to be clear about who trains their AI systems, what data was used in that training and, most importantly, what went into their algorithms’ recommendations.”
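As one illustration of what “explainable” can look like in practice, the sketch below uses scikit-learn’s permutation importance to rank which inputs most influence a model’s predictions. The dataset and model here are stand-ins for whatever system a team is actually reviewing, not a prescribed approach.

```python
# Illustrative sketch: surfacing which features most influence a model's
# predictions via permutation importance. The dataset and model are
# hypothetical stand-ins for the system under review.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features by how much shuffling each one degrades accuracy.
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```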

Continuously monitor and test AI models. Machines are continuously learning from ever-changing datasets. Thus, it’s important to continuously monitor and test both the data and the models—for example, regularly testing and validating that automated systems used in making employment decisions are not incorporating discrimination or bias into their algorithms.
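One routine check a team might automate is the “four-fifths” adverse impact ratio commonly used in U.S. employment analysis, sketched below with hypothetical data and group labels; a real monitoring program would pair a check like this with statistical and legal review.

```python
# Illustrative sketch: a periodic "four-fifths rule" check on an automated
# screening tool's selection rates across groups. The data and group labels
# are hypothetical; flagged results call for further expert review.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["selected"].mean()
impact_ratio = rates.min() / rates.max()

print(f"Selection rates:\n{rates}\nAdverse impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # four-fifths threshold
    print("Ratio below 0.8: flag for further statistical and legal review.")
```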

Collaborate with like-minded peers on designing and governing responsible AI models. There are many responsible AI frameworks and groups helping to advance the field of ethical AI that companies can turn to for guidance. A few examples include the World Economic Forum’s Responsible Use of Technology; the National Institute of Standards and Technology’s “Artificial Intelligence Risk Management Framework”; and the U.S. Chamber of Commerce Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation.

ACI will be holding its “AI Law, Ethics and Compliance” national conference Oct. 31-Nov. 1 in Washington, DC. For more information, and to register, please visit: https://www.americanconference.com/AI-Law/
