Artificial Intelligence - There's Nothing Fake About It

Bricker Graydon LLP

Artificial Intelligence (AI) has become ubiquitous in today’s corporate lexicon. And while much has been said and written about AI, the question remains: what, exactly, is AI? Or, more aptly for this discussion, how will it impact the insurance industry?

As to the former, there is no shortage of definitions of AI. Alan Turing (“the father of computer science”) described it as “systems that act like humans.”1 While many have built on that early definition, its simplicity makes it a useful lens for evaluating just how disruptive AI will be in the insurance industry. For our purposes, we’re going to further define the word “systems” in Mr. Turing’s definition to mean both massively capable computers and the troves of data they have access to.

As to the latter – the impact that AI will have on the insurance industry – we’ll stick with a word that we used earlier: disruptive. Whether good or bad remains to be seen, but AI, and more specifically, Generative AI, will undoubtedly revolutionize insurance operations, risk management, underwriting, and the customer/carrier experience more generally.

Generative AI

Generative AI (GAI) is, at its core, a deep-learning model capable of generating text, images, and other content based on the data it was trained on.2 It learns by identifying patterns and structures within its existing data and uses them to create new and “original” content (air quotes to acknowledge the considerable dispute over whether the content is truly “original,” given that GAI is trained on volumes of already existing data).3 You’ve likely already seen, or maybe tinkered with, GAI, as it has exploded into the zeitgeist with tools like ChatGPT, DALL-E, and Bard.
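
To make the “patterns and structures” idea concrete, here is a deliberately tiny sketch of the underlying principle, in Python. It is illustrative only: the training text is invented, and real GAI models are deep neural networks with billions of parameters, not word-pair tables.

```python
# Toy illustration of "learn patterns in existing data, then generate new
# content from those patterns." Real generative models are vastly more
# sophisticated; this is a simple bigram model over an invented sentence.
import random
from collections import defaultdict

training_text = (
    "the insurer priced the policy and the insurer paid the claim "
    "the agent sold the policy and the agent filed the claim"
)

# Learn which words tend to follow which (the "patterns" in the data).
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

# Generate "original" text by sampling from the learned patterns.
word = "the"
output = [word]
for _ in range(10):
    word = random.choice(transitions[word])
    output.append(word)
print(" ".join(output))
```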

Big Data and AI: A Perfect Match

Before getting to how AI and GAI will impact the insurance industry, it is worth stepping back for a moment to reintroduce a term that has been used (and perhaps misused) with some regularity: big data. Though most understand it in context, the term “big data” has become somewhat malleable since its inception in the early 1990s. In essence, big data is the output of a society that has seen rapid advances in technology and similarly rapid advances in overall interconnectedness. Think of the speed with which a social media post makes the rounds online, or the granularity of the publicly available information about each of us. Buried in all of that data lie patterns, trends, and associations that tell a story.

As it happens, AI requires data to function. Massive amounts of data. In a way, AI allows big data to maximize its potential. Collected and stored, big data has potential energy. It’s ready to be released but until acted upon, it sits. When used by AI, it transforms into kinetic energy. It’s moving and doing work.

Insurance companies have long held massive amounts of data, and it is growing exponentially. And not just high-level data, but the type of granular data that can help transform how consumers are viewed and how risk is identified (and priced). Whether the data takes the form of social media posts, telematics, news, or weather forecasts, a mass of data no longer has to be intimidating to the end user or overwhelming to existing statistical models. AI can harness it, make sense of it, and analyze it with tremendous speed and, in most cases, accuracy. What was once overwhelming is now both achievable and scalable.

AI and Insurance

The proliferation of AI has transformed the way organizations operate and make decisions. That is unsurprising given the sheer volume of data generated daily and the attendant increase in computing capabilities. Put differently, the environment is (and has been) ripe for growth, and it will likely stay that way for quite some time.

With the introduction of GAI, this transformation has only accelerated. And as GAI models have become more sophisticated and capable, their outputs have grown increasingly realistic and personalized.

The insurance industry is, of course, not immune to any of this advancement. In fact, in many ways it may be better positioned than most because of the very data it has long had access to. Case in point: underwriting, the part of the process where it is critical to turn data into actionable insight.

Field underwriting, or, more specifically, the lack of field underwriting, could be deeply impacted by GAI platforms. Bob Gaydos, founder and CEO of Pendella, argues in Digital Insurance that the proper use of ChatGPT in life insurance underwriting could enable virtual “field” underwriting that better predicts the proper coverage class for individuals.4 That, in turn, could lead to more accurately priced policies and open a slim-margin market to more middle-income consumers. Additionally, he argues, agents could better market to typically cynical older millennials.5
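
For illustration only, here is a minimal sketch of what data-driven coverage-class prediction might look like. The features, classes, and training data are all hypothetical (this is not Pendella’s, or any insurer’s, actual model), and the sketch assumes the scikit-learn library is available; a real underwriting model would be trained on actuarial data and vetted for fairness and regulatory compliance.

```python
# Hedged sketch: predicting a coverage class from applicant data instead of
# (or ahead of) a traditional field-underwriting visit. Features, classes,
# and data are hypothetical placeholders.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical applicant features: [age, BMI, smoker (0/1), annual checkups]
X_train = [
    [34, 22.5, 0, 1],
    [52, 31.0, 1, 0],
    [45, 27.3, 0, 2],
    [29, 24.1, 0, 1],
    [60, 29.8, 1, 0],
]
# Hypothetical coverage classes previously assigned by human underwriters.
y_train = ["preferred", "standard", "standard", "preferred", "substandard"]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Score a new applicant instantly, with class probabilities to inform pricing.
applicant = [[41, 25.0, 0, 2]]
print(model.predict(applicant))        # predicted coverage class
print(model.predict_proba(applicant))  # probability per class
```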

Beyond underwriting, insurers are using chatbots to improve the customer experience. With 24/7 availability, chatbots can provide high-level advice, verify billing information, and kick-start a claims process. Chatbots, of course, are not necessarily a panacea for the user experience. Some are rule-based: they can only execute certain processes according to a set of pre-defined rules, they are built on the back end, and they are good at resolving simple issues. AI-driven chatbots are not tethered to pre-defined rules. Rather, they are trained, and they learn to recognize input in an attempt to understand a user’s intent. In either case, like many GAI tools, they are only as good as their data, and they, too, can be prone to inaccuracies.
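
The difference between the two designs is easy to see in code. Below is a minimal sketch; the keywords, responses, and intents are hypothetical examples, not any insurer’s actual bot.

```python
# Rule-based chatbot: responses are tied to pre-defined keywords. It handles
# simple issues well but fails on anything outside its rules.
RULES = {
    "billing": "Your current balance and due date are on your dashboard.",
    "claim": "I can start a claim for you. What is your policy number?",
    "hours": "Our virtual agents are available 24/7 through this chat.",
}

def rule_based_reply(message: str) -> str:
    for keyword, response in RULES.items():
        if keyword in message.lower():
            return response
    return "Let me connect you with a live agent."

print(rule_based_reply("When is my billing statement due?"))  # matches a rule
print(rule_based_reply("I got rear-ended yesterday"))  # no keyword: falls through

# An AI-driven bot would instead *learn* to map free-form input to an intent,
# e.g. "I got rear-ended yesterday" -> intent: start_claim, using a trained
# intent classifier or a large language model rather than a keyword list.
```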

The claims process itself has also seen early benefits, with various AI-driven tools offering assessments of damage and predictions of repair costs. In fact, there is no shortage of examples showing how the great majority, if not the entirety, of a claims process can be automated: from the collection of data through an onboard telematics device, to communicating with an insured through a chatbot, to a final decision on how a claim may or may not proceed.
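
As a rough sketch of that end-to-end flow, consider the hypothetical pipeline below. Every function body, field name, and threshold is a placeholder standing in for a real telematics feed, damage-assessment model, and claims rule.

```python
# Hypothetical end-to-end claims flow: telematics -> damage estimate -> decision.

def read_telematics(device_id: str) -> dict:
    # Stand-in for pulling crash data (impact force, speed) from the device.
    return {"impact_g": 4.2, "speed_mph": 28}

def estimate_repair_cost(telematics: dict) -> float:
    # Stand-in for an AI damage-assessment model; a crude heuristic here.
    return 500.0 * telematics["impact_g"]

def decide_claim(estimated_cost: float, policy_limit: float) -> str:
    # Simple, low-value claims can be approved automatically; the rest are
    # routed to a human adjuster.
    if estimated_cost <= policy_limit:
        return "auto-approve and issue payment"
    return "route to human adjuster"

telematics = read_telematics("device-123")
cost = estimate_repair_cost(telematics)
print(decide_claim(cost, policy_limit=5000.0))
```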

Fear, Uncertainty, and Regulation

While successful use cases are plentiful, the wide-scale integration of AI is not without its drawbacks and risks. Recall that many AI tools, and most GAI tools, attempt to mimic human problem-solving capabilities. And while they are good at doing so, and getting better, such tools are limited to what they have been trained to do and further limited by the data they were trained on. In other words, if a predictive model leans on a dataset that reflects past discrimination or bias, there is considerable potential that the model will perpetuate that same discrimination or bias going forward. Of course, this can happen in a purely human-led decision-making model as well, but at a considerably smaller scale.
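
A small synthetic example shows the mechanics. Suppose past underwriters systematically denied applicants from one ZIP-code group; a model trained on those historical decisions learns the group itself as a predictor. The data below is invented purely to illustrate the point, and the sketch assumes scikit-learn.

```python
# Illustrative sketch: a model trained on biased historical decisions
# reproduces the bias. Synthetic data; features are [credit_score, zip_group].
from sklearn.linear_model import LogisticRegression

# Historical labels reflect past bias: group 1 was denied regardless of merit.
X = [[0.9, 0], [0.8, 0], [0.7, 0], [0.9, 1], [0.8, 1], [0.7, 1]]
y = [1, 1, 1, 0, 0, 0]  # 1 = approved, 0 = denied (by a biased human process)

model = LogisticRegression().fit(X, y)

# Two otherwise-identical applicants, differing only in zip_group:
print(model.predict([[0.85, 0], [0.85, 1]]))  # likely [1, 0]: bias carried forward
```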

This type of concern was a through-line in the testimony of several witnesses who appeared before the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law in May 2023.6 The hearing, titled “Oversight of AI: Rules for Artificial Intelligence,” featured testimony from Sam Altman, the CEO of OpenAI; Christina Montgomery, the chief privacy officer at IBM; and Gary Marcus, a professor emeritus at New York University.

The hearing began with Chair Blumenthal playing a statement, written and voiced by AI, that mimicked his own writing and voice: what is colloquially known as a “deepfake.” Over the course of several hours, the Subcommittee heard testimony on the trends, implications, and risks associated with AI, all with an eye toward assessing how the government can tackle the unenviable task of crafting a regulatory framework for AI. Whether through a “scorecard,” a “nutrition label,” or “precision regulation,” the point was clear: some framework is necessary, and Congress likely needs to act sooner rather than later, lest it meet the same fate as its efforts to regulate social media.

Beyond Congress, the National Association of Insurance Commissioners (NAIC) is already taking steps to understand and address the rise of AI in business and how it could impact the state-based regulatory environment. The NAIC has formed both the Innovation, Cybersecurity, and Technology (H) Committee and the Big Data and Artificial Intelligence (H) Working Group (BDAI Working Group). The latter is tasked with studying the development of artificial intelligence, its use in the insurance sector, and its impact on consumer protection and privacy, marketplace dynamics, and the state-based insurance regulatory framework.

In 2021, the BDAI Working Group began surveying insurers to learn how AI and machine learning techniques are currently being used and what governance and risk management controls are in place. A report of the aggregate responses from private passenger auto writers was issued in December 2022. The survey of homeowners’ insurers was completed in late 2022, and the survey of life insurers is expected to be issued in Q2 of 2023.7

At the most recent NAIC national meeting, in Spring 2023, the BDAI Working Group heard from consumer representatives and trade associations on the draft Model and Data Regulatory Questions, a set of questions about the models and data used by insurance companies. The commentary covered considerable ground but included pointed discussions on the legal authority to ask certain questions, the overall scope of the questions, and how much data (including third-party vendor data) must be disclosed.

Regulatory Considerations

Like many aspects of insurance regulation, big data and the use thereof will be front and center in the debate over how to regulate AI in insurance. Specifically, lawmakers and regulators will need to consider:

  • How to balance encouraging innovation with addressing risk.
  • Whether existing frameworks for responsibility and liability (e.g., Section 230 of the Communications Decency Act) have applicability.
  • How to identify and prioritize harms and rights infringements (e.g., suppression of speech, bias in algorithms, and potential infringement of intellectual property).
  • How existing (and still-emerging) international, national, and state data privacy regimes factor into the wide-scale collection and use of data, particularly by AI tools.
  • Where and how federal oversight might work in a fast-moving environment.

Conclusion

AI is here, and it is not going anywhere; its uses are far-reaching. While the fear that AI will “overtake human thinking”8 and could replace thousands of human workers is real, we must be willing to confront the technology and regulate it accordingly. Legislators and regulators must be willing to learn about these issues, discuss them openly, be fearless and thoughtful in their actions, and adapt as things change.

When asked, “How should lawmakers regulate AI in insurance?”, ChatGPT responded with the following conclusion:

It is important for lawmakers to work collaboratively with industry stakeholders, consumer advocates, and experts to strike an appropriate balance in regulating AI in insurance. The aim should be to foster innovation, protect consumer interests, and ensure fair and transparent practices in the insurance sector.

That’s pretty good advice.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Bricker Graydon LLP | Attorney Advertising
