The NAIC’s development of guiding principles on artificial intelligence seeks to proactively avoid proxy discrimination, safeguard against other unfairly discriminatory outcomes, and apply risk management to address unfair discrimination. Extending the work of the NAIC, two states have introduced proposals that seek to address unfair discrimination in the use of data or algorithms, giving insurers using data or algorithms a steep climb:
- The Colorado legislature proposed SB 21-169 prohibiting the use of any external consumer data and information source, algorithm, or predictive model that unfairly discriminates against an individual based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, or transgender status.
- The Connecticut Insurance Department issued a notice on April 14, 2021, reminding all entities “to use technology and big data in full compliance with anti-discrimination laws.”
Colorado Senate Bill 21-169
In March 2021, Colorado started its journey by introducing legislation prohibiting certain activity and requiring insurers to submit information on the insurers’ use of data and algorithms to the insurance division. Bill 21-169 also grants the commissioner the right to examine and investigate an insurer’s use of data and algorithms in any insurance practice and to promulgate rules restricting or prohibiting the use of data and algorithms if it “bears no direct causal relationship to insurance losses” and unfairly discriminates. Bill 21-169 prohibits:
- Considering an individual’s race, color, national or ethnic origin, religion, sex, sexual orientation, disability, or transgender status (“protected status”).
- Directly or indirectly using any external consumer data and information source, algorithm, or predictive model that unfairly discriminates on the basis of protected status.
If an insurer uses any external data or model, Bill 21-169 also requires the insurer to file the following with the insurance division:
- A description of the external data used by the insurer.
- An indication of each insurance practice in which the insurer uses external data or models and the manner of such use.
- An attestation that each external data or model used by the insurer does not (a) intentionally or unintentionally use information concerning a person’s protected status or (b) result in proxy discrimination based on a person’s protected status.
- An assessment of whether the use of any external data or model may result in unfair discrimination based on a person’s protected status, and if so, an indication of the actions that the insurer has taken to minimize the risk of such unfair discrimination, including ongoing monitoring.
The Colorado proposal could be a sign of a rocky road ahead and raises a number of questions for the industry.
What Factors Can an Insurer Consider?
Bill 21-169 establishes a wide range of protected statuses that cannot be considered in any “insurance practice.” Insurance practice is defined broadly to include marketing, underwriting, pricing, utilization management, reimbursement methodologies, claims settlement, and fraud detection. While Bill 21-169 includes statuses that have long been considered protected, it goes further by including disability. If disability remains a protected status, this may eliminate certain types of benefits or features from being offered in Colorado. For example, a waiver of premium, monthly deductions, or payments in the event of a subsequent disability is a common benefit offered to proposed insureds who are not currently disabled. If, however, disability may not be considered as part of underwriting, then these types of disability benefits likely will not be offered to any Colorado insureds, putting them on unequal footing with insureds in other states.
What Is the Impact of “Indirectly”?
Insurers may use third-party data, algorithms, or both as a part of their underwriting process. The “indirectly” language extends the prohibition against unfair discrimination on the basis of protected status to data and algorithms supplied by third parties. Thus, insurers using third-party data vendors will need to:
- Understand the data used by third-party vendors as well as the operation of the third party’s algorithm.
- Require that the third-party vendors comply with the prohibitions of Bill 21-169.
- Document steps taken to minimize the risk of the third-party vendors’ failure to comply with the requirements of Bill 21-169.
How Can Insurers Avoid and Monitor for the Unintentional Use of Discriminatory Data Points and Proxy Discrimination?
Possible means to comply with the above-discussed requirements to avoid and monitor for discrimination include:
- As posited by consumer advocate Birny Birnbaum, monitoring and testing outcomes.
- As suggested by the Society of Actuaries: (i) devote resources to assess potential bias in the external data or models to be used; (ii) exclude all known proxies for protected statuses; and (iii) ensure that decisions are based on principles of actuarial justification and fairness.
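The outcome monitoring and testing that Birnbaum advocates can be illustrated with a simple adverse impact ratio check, a screening test borrowed from employment law’s “four-fifths rule.” This is a hedged sketch, not a method prescribed by Bill 21-169 or by either commentator: the function name, the sample outcomes, and the 0.8 threshold are all illustrative assumptions, and any real testing program would need actuarial and legal review.

```python
from collections import Counter

def adverse_impact_ratio(outcomes, reference_group):
    """Compute each group's approval rate relative to a reference group.

    outcomes: list of (group, approved) pairs, e.g. [("A", True), ("B", False)].
    reference_group: the group whose approval rate serves as the benchmark.
    Returns a {group: ratio} map; ratios well below 1.0 suggest disparate outcomes.
    """
    totals, approvals = Counter(), Counter()
    for group, approved in outcomes:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    benchmark = rates[reference_group]
    return {g: rates[g] / benchmark for g in rates}

# Hypothetical underwriting outcomes: 80% approval for group A, 50% for group B.
sample = ([("A", True)] * 8 + [("A", False)] * 2 +
          [("B", True)] * 5 + [("B", False)] * 5)
ratios = adverse_impact_ratio(sample, reference_group="A")

# A ratio below 0.8 for any group is a common flag for further review.
flagged = [g for g, r in ratios.items() if r < 0.8]
```

A flag from a test like this would not itself establish unfair discrimination, but it identifies where the Society of Actuaries’ deeper bias assessment and proxy analysis should be focused.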
Must There Be a Direct Causal Relationship?
The Colorado proposal gives the commissioner the power to restrict the use of external data or models that do not have a causal relationship to insurance losses and result in unfair discrimination. Of note, the Casualty Actuarial Society (CAS) argues that correlated, rather than causal, variables provide more accurate premiums and are thus more desirable; eliminating correlated non-causal variables may produce less accurate rates. Bill 21-169 by its terms grants the commissioner power only to restrict external data or models that both lack a causal relationship to the risk and result in unfair discrimination. Therefore, consumer data that does not bear a causal relationship to the risk appears to be acceptable so long as it does not result in unfair discrimination. Thus, this provision may not make the trek as daunting as it first seems.
Connecticut Department of Insurance April 14 Notice
While Connecticut is supportive of the insurance industry’s use of technology and expanding amounts of data, it nonetheless reminded insurers that they have a “continuing obligation to use technology and big data responsibly and transparently in full compliance with federal and state anti-discrimination laws.” Of particular importance from the notice:
- Connecticut will hold insurers using big data in their operations responsible and accountable even if the big data is provided by third-party vendors.
- Connecticut “has the authority to require that insurance carriers and third-party data vendors, model developers, and bureaus provide [it] with access to data used to build models or algorithms included in all rate, form, and underwriting filings.”
- Connecticut is concerned about:
  - How big data is “utilized as a precursor to or as a part of algorithms, predictive models, and analytic processes”;
  - How big data is governed, “emphasiz[ing] the importance of data accuracy, context, completeness, consistency, timeliness, [and] relevancy”; and
  - How algorithms are “inventoried, risk assessed/ranked, risk managed, validated for technical quality, and governed throughout their life cycle.”
The commissioner attached to the notice an extensive list “of the types of information that may be requested during the course of an examination specific to the usage of data brokers,” including with respect to:
- The examinee and who oversees data-related questions.
- Data sources, including all vendors and aggregators, and whether the data sources are checked for reliability, accuracy, consistency, and completeness.
- Data storage, including the privacy protections in place and the action plan and insurance coverage in the case of a breach.
- Data curation, including how the data is prepared for use and the methods of data validation, accuracy determination, and data correction used.
- Data documentation, including how the data transformation process is documented, by whom it is reviewed, and what corrective action is taken to prevent future errors.
Insurer’s Big Data Process and Procedures
Insurers need to consider how they will develop policies and procedures designed to comply with the Connecticut notice. How will an insurer check all data sources, including those from third parties, for reliability, accuracy, consistency, and completeness? Will the data and algorithms be housed internally or on a third-party vendor’s system? The answer has implications for the level of privacy protections and breach-response processes that must be in place. Additionally, insurers need to thoroughly document their policies and procedures so they can be reviewed upon examination.
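One way an insurer might begin to operationalize the completeness and consistency checks the notice contemplates is a lightweight validation pass over each incoming vendor feed, with the findings retained as documentation for examination. The field names, record layout, and allowed values below are illustrative assumptions, not anything the Connecticut notice specifies.

```python
def validate_feed(records, required_fields, allowed_values=None):
    """Run basic completeness and consistency checks on a third-party data feed.

    records: list of dicts received from the vendor.
    required_fields: fields every record must contain (completeness check).
    allowed_values: optional {field: set_of_valid_values} map (consistency check).
    Returns a list of (record_index, issue) findings suitable for logging.
    """
    findings = []
    for i, rec in enumerate(records):
        for field in required_fields:
            if field not in rec or rec[field] in (None, ""):
                findings.append((i, f"missing {field}"))
        for field, valid in (allowed_values or {}).items():
            if field in rec and rec[field] not in valid:
                findings.append((i, f"invalid {field}: {rec[field]!r}"))
    return findings

# Illustrative vendor records: one with an inconsistent value, one incomplete.
feed = [
    {"id": "1", "state": "CT", "score": 710},
    {"id": "2", "state": "XX", "score": 650},  # inconsistent state code
    {"id": "3", "score": 700},                 # missing state field
]
issues = validate_feed(
    feed,
    required_fields=["id", "state", "score"],
    allowed_values={"state": {"CO", "CT"}},
)
```

Checks like these cover only the mechanical dimensions of data quality; assessing accuracy and relevancy against source-of-truth records, as the notice also emphasizes, requires separate procedures.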
Authority to Require Access to Data
The Connecticut notice’s assertion of authority to require “insurance carriers and third-party data vendors, model developers, and bureaus” to provide Connecticut access to all data and algorithms being used raises significant questions. It is unclear, for example, whether a state’s regulatory authority extends to third-party vendors. In fact, consumer representative Birny Birnbaum has repeatedly raised concerns that third-party vendors are not regulated or licensed.
Perhaps Connecticut believes that, even if it has no direct authority over third-party vendors, it could require insurers to provide the information. That, however, may conflict with the agreements between the insurer and third-party vendor. These agreements often limit or prevent the insurer from disclosing the data or algorithm provided by the third party beyond the insurer’s own use. While the issue of confidentiality of third-party data and algorithms has been discussed at the NAIC, the Connecticut notice makes no mention or accommodation for this issue. Accordingly, insurers may want their vendor agreements to require third parties to cooperate in responding to such regulatory requests.
These sprints by Colorado and Connecticut are a sign to the insurance industry of the rocky road ahead, as there will be more entrants to this race. For example, the NAIC’s Accelerated Underwriting Working Group is developing a report that will include a discussion of the consumer data used in algorithms and models. And the states continue to introduce proposals limiting the data that can be used by insurers, including restrictions on the use of claims history, credit scores, education, occupation, zip code, genetic information, and status as a victim of domestic abuse. There will doubtless be more heights to scale as more states begin their own excursions by introducing further restrictions on various types of data, which insurers will need to carefully monitor.