Everything in Moderation: Artificial Intelligence and Social Media Content Review

Pillsbury - Internet & Social Media Law Blog

Interactive online platforms have become an integral part of our daily lives. While user-generated content, free from traditional editorial constraints, has spurred vibrant online communication, improved business processes and expanded access to information, it has also raised complex questions about how to moderate harmful online content. As the volume of user-generated content continues to grow, it has become increasingly difficult for internet and social media companies to keep pace with the demands of moderating the content posted on their platforms. Content moderation measures supported by artificial intelligence (AI) have emerged as important tools to address this challenge.

Whether you are managing a social media platform or an e-commerce site, minimizing harmful content is critical to the user experience. Such harmful content can include everything from posts promoting violence to child abuse. The range and scope of potentially harmful content have proven too broad for human moderators to review comprehensively. AI systems, designed to mirror the way humans think and process information, may be able to improve both the speed and the accuracy of this process. AI models are trained on large data sets to identify patterns and make predictions about new inputs, which allows computers to recognize and filter certain words or images far more efficiently than human reviewers can. As an added benefit, this reduces, and could potentially eliminate, the need for human moderators to be directly exposed to harmful content.
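To make the pattern-recognition idea concrete, the sketch below shows, in broad strokes, how a platform might train a simple text classifier to flag potentially harmful posts. It is a minimal, hypothetical illustration: the example posts, labels and flagging threshold are invented, and production moderation systems rely on far larger datasets, more sophisticated models and human review of borderline cases.

```python
# Minimal, hypothetical sketch of a text classifier for content moderation.
# The tiny dataset and the 0.5 threshold are placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = harmful, 0 = benign
posts = [
    "I will hurt you if you show up",               # harmful
    "let's meet for coffee tomorrow",               # benign
    "everyone from that group deserves violence",   # harmful
    "great article, thanks for sharing",            # benign
]
labels = [1, 0, 1, 0]

# Learn word-frequency patterns associated with each label
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score new, unseen posts and flag those above a confidence threshold
new_posts = ["thanks for the recommendation", "you deserve to be hurt"]
scores = model.predict_proba(new_posts)[:, 1]
for text, score in zip(new_posts, scores):
    decision = "FLAG FOR REVIEW" if score > 0.5 else "allow"
    print(f"{score:.2f}  {decision}  {text!r}")
```

The design point the article is making follows directly from this shape: once trained, the model scores posts in milliseconds at whatever volume the platform generates, which is what lets it outpace human review.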

While AI systems are promising, they are not without their own set of challenges. By some estimates, 2.5 quintillion bytes of data are created each day. Although AI offers a way to process large amounts of data more efficiently, the volume of content at issue is now so vast that AI models must perform with both speed and accuracy. Achieving that accuracy requires a model not only to be trained on accurate data and imagery, but also to appreciate nuances in the content under review, such as distinguishing satire from disinformation. Further, questions have been raised regarding whether these models remove the inevitable biases of human content moderators, or whether the models themselves introduce, entrench or amplify biases against certain types of users. One study, for example, found that AI models trained to detect hate speech online were 1.5 times more likely to label tweets as offensive or hateful when they were written by African-American users.
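That kind of disparity can be surfaced with a straightforward audit: compare how often a trained model incorrectly flags benign content from different groups of users. The sketch below is a hypothetical false-positive-rate comparison, not the methodology of the study cited above; the model, posts, labels and group tags are all placeholders.

```python
# Hypothetical bias audit: compare false positive rates across user groups.
# `model` is any trained classifier with a predict() method (e.g., the sketch
# above); posts, true_labels and groups are parallel lists supplied by the caller.
from collections import defaultdict

def false_positive_rate_by_group(model, posts, true_labels, groups):
    """Share of benign posts (label 0) that the model flags as harmful, per group."""
    flagged = defaultdict(int)
    benign = defaultdict(int)
    predictions = model.predict(posts)
    for pred, truth, group in zip(predictions, true_labels, groups):
        if truth == 0:              # only benign posts can be false positives
            benign[group] += 1
            if pred == 1:
                flagged[group] += 1
    return {g: flagged[g] / benign[g] for g in benign if benign[g]}

# If one group's rate is, say, 1.5x another's, the model is disproportionately
# flagging that group's benign speech and may need retraining or recalibration.
```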

This tension illustrates the difficult balance between designing models that address human inefficiencies and root out human error in content moderation, and ensuring that new systemic issues are not introduced into the models themselves. Indeed, U.S. policymakers have held numerous hearings and floated legislative proposals to address concerns about bias within AI systems and the unintentional discrimination that could result from their use.

AI systems undeniably offer online platforms enhanced capabilities to effectively moderate user-generated content, but they present their own set of challenges that must be considered as these systems are designed and deployed as moderation tools.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Pillsbury - Internet & Social Media Law Blog | Attorney Advertising

