Deeper Dive: The Use of AI In Interactive Entertainment and Videogames

BakerHostetler

Artificial Intelligence (AI) has been an integral feature of videogames and other interactive entertainment for some time. For example, a range of AI, from rudimentary algorithms to sophisticated machine learning technology, can be used to do the following: improve the logic and actions of nonplayable characters (NPCs), create “for you” features for in-game purchases, enforce fair play mechanics (such as automatic “nerfing” and level- and experience-appropriate world and enemy generation), and monitor player behavior (such as monitoring and flagging person-to-person (P2P) chats and gameplay trolling for internal player disciplinary ranking and enforcement).

With the rise of AI as a hot topic across industries, certain legal considerations are likely to be aggravated by the buzzword zeitgeist. Our recently established Interactive Entertainment and Videogames practice includes gamer attorneys who can guide videogame developers and publishers through a myriad of issues, including:

Assessing Data Flows. Understanding what data is being processed through AI tools, including personal information, is key to assessing legal and compliance risks. This includes identifying the source of the data; where the data is being stored; what is accessible by AI tools; whether de-identification, pseudonymization, anonymization or aggregation is being performed on personal information, at what point in the flow, and by whom; and whether the data will be subject to derivative processing. Gathering this key information is imperative to determining whether there are potential IP, privacy, data security and other legal issues at play.

Profiling. Should AI tools be used for the purposes of profiling players, whether for advertising or for enforcement of the game platform’s terms, companies should consider whether such profiling predicts or infers a player’s economic situation, personal preferences, interests, location or movements, among other behaviors. This is especially important if the player is under 18, considering the enactment of certain state age-appropriate design statutes and the possibility that the game in question may be considered a “social media platform” under certain state laws.

Automated Decision-Making. What is considered “automated decision-making” (which certain AI tools can perform, and which is subject to stringent legal obligations and restrictions) continues to be debated, as the term is defined differently across various laws. Generally, definitions include the element that these tools produce a legal or similarly significant effect. What qualifies as an effect similarly significant to a legal one (such as one affecting whether an individual can buy a house or get a job) has yet to be substantially tested; interpretations could range from blocking a player’s chat by blacklisting certain keywords to continuing to charge a player for a subscription after blocking them from the game for bad behavior. Because the laws also differ on the extent to which human intervention must be involved, these practices should be carefully reviewed.

Contract Negotiation and Vendor Management. Performing due diligence on vendors that provide AI tools is a must, not just to determine data flows and ownership but also to better understand the actual technology and the purposes of data processing. Vendor management through contractual representations on AI use is increasing, sometimes outright prohibiting vendors from using AI tools absent written authorization. A balance should be struck to ensure game maintenance and availability, which may lead to a strategy of reducing off-contract negotiation and instead explicitly outlining data processing and AI use parameters in the agreement.

Developing Internal AI Policies and Training. Whatever a company’s other practices with regard to AI, governance through internal policies on which AI tools can or should be used, for what purposes, and with what data drives efficient product decision-making and can encourage innovation while mitigating legal risks. Assigning an AI task force and providing training to employees should be part of such policies to the extent possible. It is important to keep the best interests of players, including their experience and gameplay, at the heart of these policies.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© BakerHostetler | Attorney Advertising
