US: NAIC Developing Best Practices for Regulatory Review of Predictive Models and Analytics in Rate Filings

Hogan Lovells

Exponential growth in computing power has opened the actuarial modeling world to new and sophisticated forms of data collection and analysis. As a result, insurance companies are seeking increased “predictiveness” of potential losses by employing ever more complex modeling methods to establish, and justify, premium rates.

Predictive analytics, which draws on a number of techniques, including data mining, statistical modeling, and machine learning, allows insurers to use “big data” to forecast future events more precisely.

Why does this matter?

These evolving techniques have made it increasingly challenging for insurance regulators to evaluate filed rating plans that incorporate complex predictive models. Compounding the issue is the fact that many state insurance departments do not have sufficient in-house actuarial expertise to review such filings.

While insurance regulators clearly recognize the great potential of emerging technology to benefit both insurers and consumers, they have also expressed concerns about how insurers might use the trove of new data sources and predictive modeling tools now available to the market. These concerns include:

  • the complexity and volume of data, which may present hurdles for smaller insurers;
  • lack of transparency and the potential for bias in the algorithms used to synthesize big data;
  • highly individualized rates that lose the benefit of risk pooling;
  • cyber threats to stored data;
  • collection of information sensitive to consumers’ privacy; and
  • the potential for discriminatory practices.

One example of discrimination in rate filings that has concerned regulators in recent years is the practice known as “price optimization.” While it can take various forms, price optimization commonly refers to the practice of varying rates based on factors other than the risk of loss, such as the likelihood that policyholders will renew their policies and the willingness of certain policyholders to pay higher premiums than others. Many regulators have determined that the use of price optimization results in rates that are unfairly discriminatory and in violation of applicable insurance laws. More recently, regulators have been grappling with whether states’ insurance discrimination laws, which prohibit discrimination based on race, religion, or national origin, could or should cover instances of disparate impact on underserved or protected classes of consumers arising from the use of predictive modeling and analytics.

“Best Practices” being developed

To address this growing regulatory knowledge gap, the National Association of Insurance Commissioners (NAIC) has tasked its Casualty Actuarial and Statistical Task Force (CASTF) with identifying “best practices” to serve as guidance to state insurance departments (and insurers) in their review of complex models underlying rating plans.

According to an October 3, 2018 draft released by the CASTF, the “best practices” are intended to, among other things:

  • provide guidance to state insurance regulators in their essential and authoritative role over the rating plans in their respective states;
  • identify elements of a model that may influence the regulatory review as to whether modeled rates are appropriately justified and compliant with state laws;
  • aid speed to market and competitiveness of the state marketplace; and
  • provide a framework for states to share knowledge and resources to facilitate the technical review of complex predictive models.

The “best practices,” which will ultimately be included in the NAIC Product Filing Review Handbook, are being drafted as guidance designed to break down the review of complex predictive models into various “considerations,” comment on what is important about each consideration, and provide insight as to when a consideration would become an issue the regulator needs to be aware of or explore further. For example, the current outline of the “considerations” section of the “best practices,” whose entries are at the moment simply placeholders, calls for, among other items, a review of “relevance of variables/relationship to risk of loss,” “massaging data, model validation and goodness of fit measures” and “accurate translation of model into a rating plan.”

On an October 9, 2018 conference call, the CASTF indicated its intention to circulate a revised draft of the “best practices” in advance of the NAIC national meeting (San Francisco, November 15-18, 2018) that will flesh out the narrative portions of the “considerations” as well as the placeholders for “Confidentiality” and “Complex Predictive Models – Policy Issues.”

While the focus of the “best practices” is on private passenger auto and homeowners’ insurance rate filings, the NAIC has tasked other committees and working groups with reviewing the impact of big data more generally. For example, the NAIC’s Big Data Working Group is currently focused on the use of data for accelerated underwriting in life insurance.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising
