With the explosion of artificial intelligence (AI) implementations, several technology organizations have established AI ethics teams to ensure that their myriad uses of AI across platforms are reasonable, fair, and non-discriminatory. Yet, to date, few details have emerged about those teams: Who are the members? What standards govern the creation and implementation of AI? Axon, the manufacturer behind community-policing products and services such as body cameras and related video analytics, has embarked upon the creation of an ethics board. Google's DeepMind Ethics and Society division (DeepMind) likewise seeks to balance the innovative potential of AI against the dangers of a technology that is not inherently "value-neutral" and that could lead to outcomes ranging from good to bad to downright ugly. Indeed, a peek behind both ethics programs may offer interesting insights into the direction of corporate AI ethics programs generally.
Diversity of Backgrounds
Axon's ethics board includes not only AI experts but also a diverse sampling from related fields: computer science and engineering, privacy and data protection, and civil liberties. A sampling of members includes the following individuals:
Axon's goal, evidently, was to establish a team that could evaluate use and implementation from all angles, from the law enforcement officials who employ such technologies to the experts who help create and shape the legislation governing their use. Axon may be moving toward facial recognition technologies; after all, police forces in both the United Kingdom and China have leveraged such technologies for years. Thus far, one of the chief concerns surrounding facial recognition is its tendency toward racial and gender bias: higher error rates for both women and African-Americans. If Axon does indeed move in that direction, it is critical that its advisory group include constituents from all perspectives and demographics.
In addition to its own commitment to diversity, DeepMind has articulated key principles that reflect its parent company's more expansive footprint across technology platforms:
DeepMind's focus on managing the risks of AI spans an even broader canvas than Axon's. In furtherance of its key principles, DeepMind seeks to answer several key questions:
Though much of the AI industry has yet to provide details about its own ethics programs, some of its blue chips have acted in unison to establish a more formalized set of guidelines. In 2016, Amazon, Apple, Google, Facebook, IBM, and Microsoft collectively founded the Partnership on AI to Benefit People and Society. Seven pillars form the basis of this partnership:
While these best practices are a promising start, the industry still lacks specifics about how the guidelines will be put into practice. Consumers are likely to maintain a healthy skepticism until more concrete guardrails emerge that offer compelling evidence of the good, rather than the bad and the ugly.