Insurance Bad Faith Claims in the Age of A.I. Jim

by Zelle LLP


Insurance Law360
January 29, 2018

On the evening of Dec. 23, 2016, at seven seconds after 5:49 p.m., the holder of a renter’s policy issued by upstart insurance company Lemonade tapped “submit” on the company’s smartphone app. Just three seconds later, he received a notification that his claim for the value of a stolen parka had been approved, wire transfer instructions for the proper amount had been sent and the claim was closed. The insured was also informed that before approving the claim, Lemonade’s friendly claims-handling bot, “A.I. Jim,” cross-referenced it against the policy and ran 18 algorithms designed to detect fraud.

In a blog entry posted the following week, Lemonade characterized the three-second claim approval as a world record. Others called it a publicity stunt. And a year later, it’s certainly old news in the insurance industry. But there is no dispute that A.I. Jim’s light-speed claim approval illustrates how “insurtech” companies — tech-oriented startups in the insurance sector — are sidestepping traditional insurers by using technology to reach customers, sell insurance products, and process insurance claims.

Insurtech represents a small sliver of the overall insurance industry, but its explosive growth is drawing a great deal of attention and capital investment, as well as a growing slice of market share. The use of algorithms in the insurance industry is nothing new, but their use has been primarily in risk assessment. The rise of insurtech, and the sector’s heavy use of algorithms in the claims-handling process, is raising questions about how traditional insurance law applies (or doesn’t) to this new paradigm in which functions traditionally carried out by human beings are increasingly handled by bots like A.I. Jim and the algorithms that power them.

Take insurance bad faith, for instance. The basic framework of an insurance bad faith claim is that the policy entitles the insured to certain benefits, and those benefits have been unreasonably withheld by the insurer. Such claims can be brought under the common law duty of good faith and fair dealing, but are commonly brought under insurance bad faith statutes that prohibit specific conduct, such as denying a claim without explanation, or without first conducting a reasonable investigation.

The threshold prerequisite for an insurance bad faith claim is that the underlying insurance claim has been denied. When the denial results from a traditional insurer’s human-powered claims examination process, the human beings involved typically leave a trail of emails, reports, and other documents that explain their process. But suppose a claim is denied by a bot rather than a human. What then?

The modern inclination to trust computers to do things right — especially things we don’t understand ourselves — may deter bad faith claims when the denial decision is made by a computer. Indeed, to the casual observer, algorithmic processing of insurance claims might seem like the gold standard in objectivity and even-handedness. But that’s not necessarily the case.

In her book Weapons of Math Destruction, mathematician Cathy O’Neil argues that people are often too willing to trust mathematical models because they believe that math will remove human bias, when in fact, algorithms may only be as even-handed as the people who create them. Media outlets describe this phenomenon as “algorithmic bias,” and many experts believe we are only in the early stages of understanding it. As the concept of algorithmic bias gains acceptance, the popular perception that algorithms are fair and objective will likely erode. And in a marketplace where insurers are often regarded as making money by denying claims, policyholders may suspect that algorithms built by insurers are biased in favor of denying claims.
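The mechanism O'Neil describes can be made concrete with a minimal sketch. The example below is entirely hypothetical (the data, the ZIP codes, and the model are invented for illustration): a toy denial model trained on skewed historical decisions will faithfully reproduce the examiners' bias, even though the training step itself is "just math."

```python
# Hypothetical sketch: a model trained on biased claim history
# reproduces the bias. All data and names are invented.
from collections import defaultdict

def train_denial_model(history):
    """Learn per-ZIP denial rates from historical (zip, denied) decisions."""
    counts = defaultdict(lambda: [0, 0])  # zip -> [denied, total]
    for zip_code, denied in history:
        counts[zip_code][0] += int(denied)
        counts[zip_code][1] += 1
    return {z: d / t for z, (d, t) in counts.items()}

def predict_denial(model, zip_code, threshold=0.5):
    """Flag a claim for denial if the learned denial rate exceeds a threshold."""
    return model.get(zip_code, 0.0) >= threshold

# Suppose human examiners historically denied claims from ZIP "11111"
# far more often, for reasons unrelated to the merits of the claims.
history = ([("11111", True)] * 8 + [("11111", False)] * 2
           + [("22222", True)] * 2 + [("22222", False)] * 8)

model = train_denial_model(history)
print(predict_denial(model, "11111"))  # True: the historical bias is now policy
print(predict_denial(model, "22222"))  # False
```

Nothing in the code is unfair in itself; the unfairness lives in the training data, which is exactly why the resulting algorithm is only as even-handed as the record it learned from.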

Algorithmic claims handling also presents practical challenges in a litigation setting. Attorneys defending insurance bad faith claims routinely rely on information gathered and analyzed by human claims examiners to show how the facts of the loss justify a denial of the claim. The humans involved in the process are available to explain and defend their decisions, and can usually point to notes, emails and other documents for support. But when a bot denies a claim, an insurer’s legal team may face the challenge of explaining to a jury how the bot arrived at that decision, and persuading jurors that they should trust the bot’s impartiality. In other words, they may have to explain in layperson’s terms how the underlying algorithms work, in hopes of persuading human jurors that a computer-generated bot acted in good faith.

As if that prospect isn’t daunting enough, consider the sheer complexity of algorithms. Several experts have opined that the algorithms we rely on in everyday life are growing so complex that even their creators can’t understand them. In a 2016 commentary in Forbes magazine, Kalev Leetaru explained the many ways in which the complexity of algorithms can outpace the forward-thinking abilities of their human creators, leading to unintended, even tragic, outcomes. Given the boundless factual complexity of insurance claims, the likelihood of unintended outcomes in bot-reviewed claims seems great.

Yet another layer of uncertainty is created by the fact that algorithms are not static. One of the great strengths of a good algorithm is its ability to “learn” from its experiences. But what a bot will learn is not always clear — or positive.
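How a learning system can drift away from its initial behavior is easy to illustrate. The sketch below is a deliberately simplified, hypothetical example (the update rule and numbers are invented): a single "denial weight" that starts neutral is nudged toward each observed outcome, and a skewed run of feedback pulls it far from neutral.

```python
# Hypothetical sketch: an online-updated score drifting under skewed feedback.
# The update rule and values are invented for illustration only.

def update(weight, feedback, lr=0.1):
    """Nudge the denial weight toward each observed outcome (1=deny, 0=pay)."""
    return weight + lr * (feedback - weight)

w = 0.5  # initially neutral
for outcome in [1] * 20:  # a run of deny-heavy feedback
    w = update(w, outcome)

print(round(w, 3))  # → 0.939: drifted from neutral almost all the way to "deny"
```

The point is not that any real claims bot works this way, but that "learning" is just repeated adjustment toward whatever signal the system is fed; if the signal is skewed, so is the result.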

Perhaps the most highly publicized example of a bot run amok is “Tay,” a chatbot designed by Microsoft as an experiment in “conversational understanding.” When Microsoft launched Tay on Twitter in March of 2016, it was supposed to learn to engage in “casual and playful conversation” by interacting with other Twitter users. Well, learn Tay did, but what it learned was not casual or playful. Within 24 hours, Tay had learned to parrot the racist, anti-Semitic, misogynist rants showered on it by Twitter trolls. Microsoft pulled Tay down, deleted the worst of the tweets and apologized.

Tay is a high-profile example of a flawed bot that was exposed to a wave of negativity in what some believe was a coordinated attack. But that experience has become a cautionary tale of how machine learning can run off the rails. In the context of insurance claims, the risk that an initially fair and impartial bot could develop an unfair bias over time cannot be dismissed. In the blog post announcing A.I. Jim’s world-record claims processing time, Lemonade noted that “A.I. Jim is still learning” under the guidance of “real Jim,” Lemonade’s chief claims officer, Jim Hageman.

Human supervision of a bot’s education is surely well-advised. Insurtech companies that launch a bot and leave it to its own devices may find themselves exposed to bad faith claims because their bot fell in with the wrong crowd. And if their bots apply the bad lessons they have learned too broadly, insurers may find themselves grappling with claims of “institutional bad faith” that implicate their practices and procedures on a broad scale, not just on a claim-by-claim basis. The litigation costs could pale in comparison to the public-relations costs.

Voices in the insurtech industry have played down the risks of bot-based claims handling. But the risks are not merely hypothetical. A cursory (and admittedly unscientific) survey of online ratings for insurtech companies shows that they include numerous complaints about claims denied without explanation, or without investigation. Those are precisely the types of conduct that fall within the model Unfair Claims Settlement Practices Act that has been adopted in one form or another by almost all 50 states.

As noted above, algorithms have been a fixture in risk assessment for a long time, but using them in the claims-handling process poses new risks and challenges, including the risk of bad faith claims that could be very difficult to defend. As insurance companies know better than anyone, identifying risk is the first step in avoiding it. There are indications that the creative minds that hatched the insurtech model are also leading the way in addressing the risk of insurtech bad faith claims. One way to do that is to limit a bot’s authority to approving only clear-cut claims, like the case of the stolen parka, and to program it to route dubious claims into human hands rather than denying them.
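The routing idea described above can be sketched in a few lines. This is a hypothetical illustration, not Lemonade's actual logic; the `Claim` fields, thresholds, and function names are all invented. The key design choice is that the bot has only two outputs, "approve" and "escalate": anything short of a confident approval goes to a human examiner, and the bot never denies a claim on its own.

```python
# Hypothetical sketch of bot triage: approve only clear-cut claims,
# escalate everything else to a human, never deny. All names and
# thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    covered_by_policy: bool
    fraud_score: float  # 0.0 (clean) .. 1.0 (suspicious)

def triage(claim, fraud_threshold=0.2, amount_cap=2000.0):
    """Return 'approve' only for clearly valid, low-risk claims."""
    if (claim.covered_by_policy
            and claim.fraud_score < fraud_threshold
            and claim.amount <= amount_cap):
        return "approve"
    return "escalate"  # a human decides; the bot never returns "deny"

print(triage(Claim(900.0, True, 0.05)))  # approve: the stolen-parka case
print(triage(Claim(900.0, True, 0.60)))  # escalate: suspicious, needs a human
```

Because the bot can never deny, a bad faith claim premised on an unexplained or uninvestigated algorithmic denial has no denial to point to; every adverse decision carries a human trail.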

Referring questionable claims to real people is already part of the game plan for Lemonade. In the same blog post that trumpeted A.I. Jim’s “world record” claim approval, the company noted that “real Jim” — the company’s flesh-and-blood chief claims officer — “is by far the more experienced claims officer,” and that A.I. Jim “often escalates claims to real Jim. That’s why not all Lemonade claims are settled instantly.” Whether other insurtech companies use the same approach is not clear. But considering insurtech’s creative track record, it’s likely that as new problems and risks become apparent, solutions will follow close behind.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Zelle LLP | Attorney Advertising
