NAIC’s Privacy Protections (D) Working Group begins revising privacy model laws

On May 5, 2020, the NAIC’s Privacy Protections (D) Working Group met via conference call. The Working Group was formed on October 1, 2019, under the Market Regulation and Consumer Affairs (D) Committee on a referral from the Innovation and Technology (EX) Task Force. The Working Group is charged with: (i) reviewing long-standing state insurance privacy laws regarding the collection, use and disclosure of personal information gathered in connection with insurance transactions and (ii) making recommendations regarding any updates or modifications to NAIC models.

This charge comes in the wake of significant changes to the privacy law landscape in the US, most notably the California Consumer Privacy Act (CCPA). The CCPA provides an array of privacy rights to consumers that are similar, but not identical, to rights provided under Europe’s General Data Protection Regulation (GDPR), including the rights to access, delete and prohibit the sale of information that businesses collect about them. Most of the personal data that insurers collect is currently exempted from the CCPA, so insurers are, for the most part, relieved from complying with the CCPA’s new consumer rights regime. However, as more states consider privacy laws similar to, or more aggressive than, the CCPA, it is an open question whether insurers will be similarly exempted.

The Working Group has begun assessing the NAIC’s existing model laws to consider adding elevated consumer rights and privacy protections similar to those of the CCPA, beginning with the NAIC Insurance Information and Privacy Protection Model Act (#670). As a first step, the Working Group exposed for comment a markup of Model Law #670 listing what it saw as key issues to consider. During its May 5 call, the Working Group began to walk through the exposed markup of Model Law #670 section by section and discussed comments received. In doing so, Working Group Chair Cynthia Amann (MO) noted several points where regulators already have a clear vision of the approach they would like to take:

  • In addition to Model Law #670, the Working Group plans to address the Privacy of Consumer Financial and Health Information Regulation (Model #672).
  • In a notable change from its original scope, the Working Group now plans to address healthcare privacy issues, both under Model #672 and under the NAIC Model Rules Governing Advertisements of Medicare Supplement Insurance with Interpretive Guidelines (Model #660).
  • Regulators tend to agree that an updated model privacy law should apply to all regulated insurance entities, so that third-party administrators (TPAs) and other third parties are included in the meaning of an “insurance support organization.”
  • The Working Group plans to address the definitions section of Model #670 last and intends to rely on existing standardized definitions already included in either the Market Regulation Handbook or the IT Examination Handbook.

During the discussion, industry group representatives made a number of comments on the general approach that should be taken by the NAIC: 

  • Several commenters questioned using Model #670 as the initial base document for the Working Group’s efforts, noting that Model #672 is more widely adopted and used in states today than the older Model #670. In response, Chair Amann noted this was an important discussion point to keep in mind, including the possibility of combining Models #670 and #672.
  • America’s Health Insurance Plans (AHIP) commented that it was important for the final updated product to include language, both in the preamble and in the text of the law, establishing that the model act is the exclusive standard applicable to insurers and other covered organizations.
  • The American Council of Life Insurers (ACLI) commented that it was important for the Working Group to take into consideration new ways in which information is collected and to provide clear guidance on how insurers will be expected to handle consumers’ personal information.
  • ACLI also noted that there should be no distinction in requirements between different lines of business, as there has been in the past under Model #670. Members of the Working Group agreed and said they would consider drawing language from Model #672 to address this issue.
  • The American Land Title Association (ALTA) encouraged the Working Group to adopt the definition of “public information” used in the CCPA. Privacy laws generally exclude public information from being subject to consumer rights and protections since the information is already publicly available.

The NAIC’s Jennifer McAdams provided an update to the Working Group on state and federal privacy legislative initiatives. Fifteen states – Arizona, Florida, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Nebraska, New Hampshire, New Jersey, New York, Pennsylvania, South Carolina (applies only to biometric information), Virginia and Washington – have pending data privacy legislation. Many of the bills are comprehensive measures modeled after the CCPA, while others are modeled more closely on the GDPR. Others, particularly New York’s bill (carried over from last year), would go beyond either of those standards to introduce entirely new concepts, such as a “data fiduciary.” On the federal side, there are three significant bills, all of which address both data privacy and data security issues. However, these bills vary on the stringency of the standards they set (with at least one bill going beyond the CCPA), whether to create a private right of action, and the extent to which any federal law should preempt state law. Work on these proposed privacy and cybersecurity bills at both the state and federal levels has largely stalled following the onset of the COVID-19 pandemic.

NAIC’s Artificial Intelligence (EX) Working Group Closes in on Principles for the Use of Artificial Intelligence

The Artificial Intelligence (EX) Working Group also met via conference call on May 5, 2020. The Working Group, chaired by Commissioner Godfread (ND), is charged with developing regulatory guidance around the use of AI and making its recommendations to the Innovation and Technology (EX) Task Force by summer 2020. To this end, the Working Group is drafting principles for the development and adoption of AI in the insurance sector based on existing principles developed by the Organization for Economic Cooperation and Development (OECD).1 The draft includes five general principles that insurers should strive to meet in developing and adopting AI: (i) fair and ethical, (ii) accountable, (iii) compliant, (iv) transparent and (v) secure, safe and robust. Each of these five principles is further defined in the draft guidance document. During its call, the Working Group discussed comments received on the third draft of the principles, focusing on the wording of the draft guidance. Regulators addressed several notable issues during the discussion:

  • The scope of the principles is intended to include not only insurers, but also third parties such as rating and advisory organizations. Consistent with the principle of Accountability, regulators expect insurers to be responsible for any use of AI by their third-party service providers.
  • Disclosures regarding the functionality and use of AI should differ for consumers and regulators, with regulators having access to much more information than would be useful or desired by consumers.
  • Disclosures should identify the kinds of data being used, the purpose for using the data, and the consequences for all stakeholders. Regulators made clear they believe consumers have a right to know what information is used and where that information comes from. How such disclosures will interact with disclosures required by existing (or new) privacy and other laws was not addressed.

Following the discussion on its May 5 call, the Working Group has since published a fourth revised version of the principles and will continue working through additional comments and edits as it pushes to finalize the guidance by this summer. The Working Group has not yet provided a deadline for comments on the fourth revised draft nor set a date for its next call. We will continue to follow these issues and provide updates as they develop.

------------------------------------

1 In 2019, the OECD identified five principles for the responsible stewardship of trustworthy AI and called on AI actors to implement them: (1) inclusive growth, sustainable development and well-being; (2) human-centered values and fairness; (3) transparency and explainability; (4) robustness, security and safety; and (5) accountability.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Eversheds Sutherland (US) LLP | Attorney Advertising
