As accelerated underwriting (AU) and artificial intelligence (AI) begin to turn life underwriting upside down, several NAIC working groups are seeking to bring order to the disruption: the Big Data (EX) Working Group (“Big Data WG”), the Innovation and Technology (EX) Task Force (“Innovation TF”), the Accelerated Underwriting (A) Working Group (“AU WG”), and the Artificial Intelligence (EX) Working Group (“AI WG”). Discussed below are some of the key questions they have been considering that potentially have major implications for consumers and the insurance industry.
Who Is Subject to Regulation?
With the flood of newly available consumer data, third-party vendors have entered the fray of life insurance underwriting. By repackaging that data and developing new models, these vendors offer to reduce the time it takes to underwrite a policy. Consumer groups have complained vociferously that unregulated third-party vendors are not accountable if they provide an insurer with data points or models containing inaccurate information or prohibited factors that lead to unfair discrimination. At the August 13 NAIC special session on race, Birny Birnbaum of the Center for Economic Justice urged regulators to establish oversight of unregulated vendors of data and models.
Acknowledging these concerns, the AI WG incorporated into its AI Principles a definition of “AI actors” that includes “third parties such as rating, data providers and advisory organizations” who play an active role in the AI system life cycle. By so doing, regulators have made clear their expectation that third-party vendors “promote, consider, monitor and uphold” fair, ethical, accountable, compliant, transparent, secure, safe, and robust AI principles even if they are outside the regulatory reach of the state insurance departments. The AI Principles were adopted at the August 14 Joint Meeting of the NAIC’s Executive Committee and Plenary.
What Data Should Be Used?
- Is the Data Accurate?
Because the new sources of non-traditional data are often not consumer reporting agencies, and are therefore not subject to the Fair Credit Reporting Act (FCRA), regulators and consumer groups at the August 7 Innovation TF meeting questioned the accuracy of the disjointed array of data used in AU. To ensure the accuracy of non-traditional data, the AU WG considered at its July 31 meeting:
- Reinforcing to insurers that they retain the sole responsibility for the collection, scrutiny, and analysis of data to ensure it is reliable, even if it is provided by a third-party vendor.
- Banning the use of non-FCRA data or requiring FCRA-type protections on non-FCRA data, including consumer rights to access and correct such data.
- Do the Data Points Used Reflect Causation or Merely Correlation?
To the extent that behavioral data points, such as a person’s gym membership, shopping habits, wearable device data, magazine subscriptions, voting history, and web browsing history, are used within AU models, regulators and consumer groups have expressed concerns that such data points:
- Have a rational and understandable relation to risk, rather than an arbitrary one.
- Reflect the consumer’s reality. For example, the fact that a lower-income individual cannot afford a monthly gym membership does not automatically mean that person lives an unhealthy lifestyle warranting a higher risk class.
- Reflect only the individual's own behavior, not unrelated information. For example, a person could purchase unhealthy products at a grocery store for someone else's consumption.
Presenters at the August 4 Big Data WG meeting urged regulators to “dig deeper” into what an insurer’s model is trying to achieve, why each variable is important, and “what aspect of the real world makes the correlation come about.”
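The correlation-versus-causation concern can be made concrete with a small simulation. The scenario below is purely illustrative and not drawn from any insurer's actual model: a confounding variable (income) drives both gym membership and a healthy outcome, so the membership flag predicts health in aggregate even though it has no causal effect, and the apparent relationship largely vanishes once income is held roughly constant.

```python
import random

random.seed(0)

# Hypothetical illustration: income is a confounder that drives both gym
# membership and health, so membership correlates with health without
# causing it.
n = 10_000
income = [random.gauss(0, 1) for _ in range(n)]
gym = [1 if inc + random.gauss(0, 1) > 0 else 0 for inc in income]
healthy = [1 if inc + random.gauss(0, 1) > 0 else 0 for inc in income]

def gap(flags, outcomes):
    """Difference in the healthy rate between flag=1 and flag=0 groups."""
    yes = [o for f, o in zip(flags, outcomes) if f == 1]
    no = [o for f, o in zip(flags, outcomes) if f == 0]
    return sum(yes) / len(yes) - sum(no) / len(no)

# In aggregate, gym members look noticeably healthier.
raw_gap = gap(gym, healthy)

# Within a narrow income band, the apparent effect largely disappears.
band = [(g, h) for g, h, inc in zip(gym, healthy, income) if abs(inc) < 0.25]
cond_gap = gap([g for g, _ in band], [h for _, h in band])

print(f"raw gap: {raw_gap:.2f}, within-income gap: {cond_gap:.2f}")
```

This is the substance of the regulators' "dig deeper" request: a variable's predictive power may come entirely from a factor the insurer did not intend, or may not be permitted, to price on.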
- Should Credit Scores Be Allowed?
Credit scores are an increasingly contentious factor in underwriting "as the distributions of credit scores vary significantly among ethnic groups." At the NAIC special session on race, regulators discussed the historical bias embedded in credit scores and the potential discriminatory impact of factors linked to economics. During its July 31 meeting, the AU WG warned that credit scores should not be used in isolation; instead, checks and balances must be employed to protect against discrimination.
Are Consumers Adequately Protected?
- What Do Consumers Know and Did They Consent?
Regulators fear that consumers are unaware of, or confused about, the amount and extent of data being collected on them and how it is being used. Regulators and consumer representatives are considering requiring insurers to:
- Obtain consumers’ consent.
- Disclose the information used in underwriting.
- Test input data for accuracy and inherent bias.
Additionally, the AU WG’s work product will seek to address whether:
- Consumers understand what information can be collected on them and how it can be used.
- The results are transparent to consumers.
- Do the Data Points or Models Used Discriminate?
To confront the issue of whether data points or models result in discrimination:
- After its June 30 meeting, the AI WG included within its AI Principles “avoiding proxy discrimination” due to regulatory concern that some data points such as credit score, education, occupation, and criminal history used in a model may result in unfair discrimination.
- During its July 31 meeting, the AU WG discussed the need for insurers to test their models and ensure the results are not skewed but are reliable and unbiased. This testing should occur during development, periodically, and on all future generations of an AU program. The AU WG also posited that insurers should document their AU program testing and monitoring and warned that AU programs will be challenged in upcoming market conduct exams.
- Also at its July 31 meeting, the AU WG stressed that the ability to explain the data points used and how the model works should extend beyond those who run the model to multiple departments, including IT, internal audit, actuarial, and legal.
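As a minimal illustration of the kind of outcome testing the AU WG describes, the sketch below compares favorable-decision rates across groups in hypothetical decision logs and applies the "four-fifths" screening heuristic borrowed from employment law. The data, group labels, and 0.8 threshold are assumptions for illustration only, not a metric any NAIC group has prescribed.

```python
def adverse_impact_ratio(decisions):
    """decisions: dict mapping group -> list of 0/1 favorable outcomes."""
    rates = {g: sum(d) / len(d) for g, d in decisions.items()}
    best = max(rates.values())
    # Ratio of each group's favorable rate to the best-treated group's rate.
    return {g: r / best for g, r in rates.items()}, rates

# Hypothetical decision logs from an AU program.
decisions = {
    "group_a": [1] * 80 + [0] * 20,   # 80% favorable
    "group_b": [1] * 60 + [0] * 40,   # 60% favorable
}
ratios, rates = adverse_impact_ratio(decisions)

# Flag any group whose ratio falls below the four-fifths threshold.
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # → ['group_b'] (0.60 / 0.80 = 0.75 < 0.8)
```

A screen like this is only a starting point; the working group's discussion contemplates testing during development, periodically thereafter, and on every future generation of an AU program, with the results documented for market conduct exams.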
Do Regulators Have the Tools to Review the Models?
Regulators acknowledge that their review of complex models becomes more difficult if:
- There is a lack of transparency, particularly if the models are a “black box” because it is not clearly explainable how a given rating or score resulted from the data used by the model. This issue is exacerbated if the models evolve over time through machine learning.
- There is a lack of regulatory expertise and resources to review complex models properly. Regulators have discussed the development of an NAIC resource to assist their review of complex models, particularly for property and casualty rate review.
- Companies rely on third-party vendors, who are not subject to regulation, to provide data or develop models and such vendors restrict insurers from sharing information.
At the August 8 Big Data WG meeting, presenters from the Casualty Actuarial and Statistical Task Force discussed that the regulatory review of complex models should:
- Ensure compliance with rating laws: rates must not be excessive, inadequate, or unfairly discriminatory.
- Review all aspects of the model: data, assumptions, adjustments, variables, input, and resulting output.
- Evaluate how the model interacts with and improves the rating plan.
- Enable competition and innovation.
Additionally, presenters at the August 7 Innovation TF meeting suggested that regulatory review of models should take place before the models are put into use, especially if they come from a third-party vendor.
*With assistance from Facundo Scialpi, a student at the University of Miami School of Law.