Insurers’ Use of Artificial Intelligence: Opportunities and Challenges

Faegre Drinker Biddle & Reath LLP

Like companies across most of the financial services sector, insurance companies and their regulators have in recent years increasingly used algorithms, machine learning and artificial intelligence in their day-to-day operations. Indeed, a 2019 study found that more than half of property/casualty insurers and nearly forty percent of life carriers had adopted predictive analytics programs integrated with their core systems, while another forty percent of life insurers planned to develop such programs.

Some carriers use machine learning techniques to analyze a wider array of information about their insureds and prospective insureds, including social media posts, online reviews and government filings. With these AI-assisted risk assessments, insurers may be better able to customize insurance policies to an insured’s needs and to assess the risk posed by a particular applicant. Insurance companies have also begun using algorithms to detect patterns of fraudulent behavior that data sources such as wearable technologies cannot reveal on their own. To the extent these innovations succeed at individualization and fraud prevention, they lower the overall level of risk that insurers face, which can translate into cheaper premiums for insureds.

AI systems have also been deployed to assist consumers directly. Chatbots, for example, have become ubiquitous online tools that answer insureds’ questions and resolve routine issues, reducing the need for a human customer service agent.

But adoption of these techniques has created new challenges for regulators, some of whom may be unfamiliar with conducting deep technological examinations of the algorithms often used in underwriting. That unfamiliarity makes it more difficult for regulators to assess whether carriers are complying with anti-discrimination principles.

That is not to say that regulators have not responded. The NAIC’s Big Data and Artificial Intelligence (H) Working Group is focused on assessing the tools that regulators need to monitor the insurance marketplace and on recommending the development of additional tools to improve oversight. The Working Group also created an AI-based model to help identify insurers at greater financial risk; this model is being used to improve the accuracy of the NAIC’s Life Scoring System for solvency monitoring. In 2021, the Federal Trade Commission issued guidance on how to adopt AI tools fairly and equitably, and announced that it was considering rules to ensure that algorithmic decision-making does not result in unlawful discrimination. In the same year, the Equal Employment Opportunity Commission launched an initiative to ensure that AI does not exacerbate civil rights issues.

As with all issues that involve new and growing technologies, this landscape will continue to change by the day, and insurers will surely begin to develop new tools that may create new and unforeseen challenges for regulators, at least in the short run.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Faegre Drinker Biddle & Reath LLP | Attorney Advertising
