Lessons from the Axon AI Ethics Board Resignation

Fenwick & West LLP

[co-author: Sydney Veatch]

A few short days after 19 children and two teachers were killed in Uvalde, Texas, Axon Enterprise, a leading provider of law enforcement technology including the Taser, announced plans to develop “non-lethal armed drones,” dubbed Taser Drones, which Axon claimed could be installed in schools to combat mass shootings. Axon, however, failed to consult its AI Ethics Board before the announcement, prompting nine of the board’s 13 members to resign. In their joint resignation letter, the nine members noted that the board had previously held extensive discussions with Axon about similar technology and had voted against Axon moving forward, even on limited terms.

As AI capabilities continue to expand, the development of law enforcement–related and weaponized AI has been and will likely remain controversial. Axon sought to address this controversy by establishing its own AI ethics board. Formed in 2018, the board drew the attention of more than 40 civil rights groups, which urged it to prohibit the development and deployment of certain capabilities, such as real-time facial recognition. Since then, the board has published three annual reports detailing its recommendations to Axon on other prominent issues. It is unclear why Axon did not consult its AI Ethics Board before announcing development of the Taser Drone, especially given past board reports indicating that Axon had been receptive to the board’s input, even when the board recommended against proposed developments. Shortly after the resignations, and amid considerable negative press, Axon announced it was pausing development of the Taser Drone and asserted that the original announcement was intended to “initiate a conversation of a potential solution” and not “an actual launch timeline.”

These resignations and the ensuing public relations backlash should remind all companies with AI ethics boards that establishing such a board is only the first step; the company must have processes in place to consult the board ahead of important product decisions and to give serious weight to its recommendations. The purpose of an AI ethics board is not to fall in line and justify the company’s actions; it is to challenge the company and push for truly ethical product development. A recent example of a company taking ethical recommendations seriously comes from Microsoft, which, to meet the requirements of its newly released Responsible AI Standard, retired its facial analysis capabilities that purported to infer emotional states and identify attributes such as gender and age.

The larger takeaway from this story is that companies addressing AI risk (among other concerns) must not only establish appropriate policies and practices but also use them. Whether the subject is privacy, trade secrets, HR concerns or AI risk, having a policy and not adhering to it runs counter to the principles that motivated creating the policy in the first place.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Fenwick & West LLP | Attorney Advertising
