Video Games, AI, and …the Law?

Sheppard Mullin Richter & Hampton LLP

Video games have come a long way. They have morphed from simulated games of ping pong into today’s fully immersive virtual reality games that leverage biometrics and artificial intelligence (AI). While the early uses of AI in games were simple, such as creating more realistic non-player characters, AI now enables much more. AI-based tools may be used to outsource quality assurance, gain data-driven insights into players, or better understand player value to maximize retention and in-game revenue. Now is thus a good time for companies to keep in mind regulatory bodies’ increased focus on the use of AI.

In the US, the FTC has provided guidance on using AI in ways that avoid unfair or deceptive trade practices. As applied to the gaming industry, the FTC’s key considerations for video game publishers (which we also summarized in our sister blog) include:

  • Accuracy. AI components of a game or service should be tested prior to implementation to confirm they work as intended.
  • Accountability. Companies should think about how the use of AI will impact the end user. Outside experts may be used to help confirm that the data being used is bias-free.
  • Transparency. End users should be made aware that the company may use AI; it should not be used secretively. Individuals should know what data is being collected and how it will be used.
  • Fairness. To further concepts of fairness, the FTC recommends giving people the ability to access and correct their information.

State comprehensive privacy laws (such as the forthcoming laws in California, Colorado, and Virginia, which we discussed in our sister blog) will also impact companies’ use of AI. These laws require companies to provide individuals with opt-out rights regarding the use of AI in automated decision-making and profiling. They also mandate data protection impact assessments for processing activities that pose a heightened risk, such as automated processing. In line with the FTC’s transparency principle, California’s CPRA also requires responses to access requests to include information about the logic and outcome involved in such decision-making processes. NIST (the National Institute of Standards and Technology, a US federal agency) has also proposed an AI risk management framework.

AI has received similar scrutiny in Europe (discussed in our sister blog), where the focus has been on providing transparency and oversight when AI is used, particularly when automated decision-making occurs. As with the state laws noted above, a risk-based approach will be needed. The EU is currently considering an AI Act that would take these concerns into account.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Sheppard Mullin Richter & Hampton LLP | Attorney Advertising

Written by:

Sheppard Mullin Richter & Hampton LLP
