AI Insurance Company Faces Class Action for Use of Biometric Data

Carlton Fields
After a tweeting mishap, Lemonade Inc., an AI-based insurance company, faces a class action for allegedly violating New York law by using facial recognition technology to analyze videos submitted in the claims process, thereby collecting customers' biometric data without consent.

Artificial intelligence and big data are key parts of Lemonade’s appeal to consumers and investors, but those same tools provoked concern on social media when Lemonade mentioned its use of facial recognition to analyze videos. In a series of now-deleted tweets, Lemonade stated that it gathers more than 1,600 “data points” about its customers, which is “100x more data than traditional insurance carriers,” to be analyzed by a “charming artificial intelligence bot” that then crafts and quotes insurance policies. The data points include videos made and submitted by customers. Lemonade’s AI bot analyzes the videos for signs of fraud and supposedly can detect “non-verbal cues” that traditional insurers cannot. According to the class action complaint, Lemonade also tweeted that this process “ultimately helps … lower [its] loss ratios” and its “overall operating costs.”

These tweets raised concerns among Twitter users about the collection of facial biometric data, including the possibility of discrimination based on race and other traits. In response, Lemonade tweeted that it did not use and is not “trying to build AI that uses physical or personal features to deny claims.” Rather, Lemonade explained that it asks for a video during the claims process because “it’s better for [its] customers” and that the “term non-verbal cues was a bad choice of words to describe the facial recognition technology [it] us[es] to flag claims submitted by the same person under different identities.”

In August 2021, plaintiff Mark Pruden filed a putative class action in the Southern District of New York alleging that Lemonade violated New York statutory and common law by “collecting, storing, analyzing, or otherwise using biometric data of thousands of its customers without their authorization or consent,” and contrary to its privacy policy. The claims include violation of New York’s Uniform Deceptive Trade Practices Act, breach of contract, breach of implied contract, and unjust enrichment.

As of December 2021, the case is stayed while the parties pursue settlement negotiations.

Biometric data continues to be a hot topic among consumers, regulators, and plaintiffs’ lawyers, especially amid growing concern by consumers about how and why their biometric data is collected. Companies should clearly disclose how they collect and use such data, obtain unambiguous consent from their customers, and exercise caution when posting on social media.
