NAIC Adopts Model Bulletin on Artificial Intelligence

BakerHostetler

States Continue to Develop Law and Guidance in This Area

Key Takeaways:

  • NAIC Model Bulletin on AI Adoption: NAIC adopts a model bulletin on AI use in insurance, providing guidance to state regulators. The bulletin calls on insurers to implement specific controls, emphasizing responsible AI use and adherence to state laws on unfair trade practices and claims settlement.
  • Standards for AI Programs: Model bulletin outlines standards for insurers' AI programs, including validation, testing, and retesting of AI systems.
  • Statewide AI Regulations: Various states, including Colorado and California, are adopting regulations and guidance addressing AI in insurance.

On December 4, the National Association of Insurance Commissioners unanimously adopted a model bulletin on the use of artificial intelligence in insurance. The model bulletin is intended for use by state insurance regulators to establish expectations for how insurers will develop and use AI technologies in a manner consistent with state law, including laws addressing unfair trade practices and unfair claims settlement practices. The model bulletin will be effective in any particular state only if it is adopted by that state and would apply to any insurer holding a certificate of authority to do business in the state.

The model bulletin is unique insofar as it calls on insurers to implement a number of specific controls to mitigate the risk that consumers will be adversely affected by the use of AI in a manner that violates state law. Specifically, the model bulletin states that insurers are expected to develop and maintain a written program for the responsible use of AI systems. The model bulletin also encourages insurers to use verification and testing methods “to identify errors and bias” and the potential for unfair discrimination in predictive models and other AI systems.

Standards set forth by the model bulletin for a written program addressing the use of AI include the following:

  • The program should address governance, risk management controls and internal audit functions.
  • The program should vest responsibility for oversight of the program with senior management accountable to the insurer’s board or an appropriate committee of the board.
  • The program should be tailored to and proportionate with the insurer’s use of AI, taking into account the degree of potential harm to consumers that could result.

The model bulletin states that the risk management controls established by an insurer’s AI program should include, among other things, standards for validating, testing and retesting AI systems as necessary to assess their outputs and standards for evaluating the suitability of data used to train, validate and audit such systems. It also states that the program should address processes for assessing data and AI systems provided by third parties.

The model bulletin arrives at a time when a growing number of states are adopting standards regarding the use of AI in insurance, including the following:

  • A regulation adopted by the Colorado Division of Insurance in September 2023 requiring life insurers using external consumer data and information sources (ECDIS) to implement a governance and risk management framework designed to prevent unfair discrimination, and a proposed regulation requiring life insurers using ECDIS to test for unfair discrimination and disparate impact by race and ethnicity.
  • Draft regulations issued on November 27, 2023 by the California Privacy Protection Agency (CPPA) regarding automated decisionmaking technology (ADMT), which would require businesses using ADMT to give California residents notice of a right to opt out of that use where such use involves personal information, as well as a right to access information regarding the use of ADMT before any ADMT is used. These requirements may align with other state privacy laws, including the Colorado Privacy Act (CPA), which provides a right to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer.
  • A bulletin issued by the California Insurance Commissioner on June 30, 2022 expressing concern that the use of AI technologies and “Big Data” by insurers could lead to unfair discrimination and instructing insurers to review their practices in this area, as well as a similar bulletin issued by the Connecticut Insurance Department on April 20, 2022.
  • Plans announced by the New York Department of Financial Services (NYDFS) to issue a Circular Letter setting forth best practices for insurers using AI and clarifying questions arising from a Circular Letter issued by the NYDFS on January 18, 2019, which expressed concern that the use of external data, algorithms and predictive models to underwrite life insurance could lead to unfair discrimination and a lack of transparency to consumers, and which set forth certain guidelines for the use of such data and methods.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© BakerHostetler | Attorney Advertising
