Artificial Intelligence Briefing: NYDFS Releases Proposed Circular on AI Use in Insurance Underwriting, Pricing

Faegre Drinker Biddle & Reath LLP

New York’s Department of Financial Services has released proposed guidance on the use of AI and other data sources in insurance underwriting and pricing — and you’ll note plenty of influence from Colorado’s governance regulation and the NAIC model bulletin. Meanwhile, legislators in several states are introducing bills targeting algorithmic discrimination, and negotiators continue to make progress on the EU AI Act. We’re diving into these topics and more in the latest briefing.

Regulatory, Legislative and Litigation Developments

  • NYDFS Weighs in on AI and ECDIS. The New York Department of Financial Services has released a proposed circular on the use of AI and external consumer data and information sources in insurance underwriting and pricing. The proposal is thorough and has a lot in common with the Colorado governance regulation and the NAIC model bulletin. Be sure to check out the detailed sections on unfair/unlawful discrimination and testing. Comments are due by March 17.
  • States Consider AI Legislation. It’s still early in the legislative season, but a number of states are already considering AI legislation that could impact a variety of sectors. For example, bills have been introduced in Oklahoma, Vermont, Virginia and Washington that would target algorithmic discrimination resulting from AI used to make a “consequential decision,” which includes decisions affecting a consumer’s access to credit, criminal justice, education, employment, health care, housing or insurance. (The legislation is modeled on a California bill that failed to gain traction last year but is expected to be reintroduced.) In addition, New York is considering a bill that’s based on Colorado SB 21-169, which addresses unfair discrimination potentially resulting from insurers’ use of external consumer data and information sources.
  • Progress on the EU AI Act. Final discussions on the text of the EU AI Act are progressing with respect to the technical details and drafting. There have been no signs yet of a substantive renegotiation of the political agreement reached before the end of 2023. The aim is to complete the legal text by the last week of January, although this may be subject to further delays. A leaked final draft circulated by European commentators indicates a number of significant changes that will take time to finalize. In parallel, the UK government intends to publish key tests that will be applied before new laws on AI can be passed, reflecting the government’s cautious and light-touch approach to AI regulation.
  • HHS Releases Rule Addressing Clinical Algorithms. On December 21, the Office of Management and Budget received the draft version of the HHS Office for Civil Rights’ final rule addressing clinical algorithms. Under Section 1557 of the Affordable Care Act (the U.S. Department of Health and Human Services’ “kitchen sink” nondiscrimination statute), the rule clarifies that a health insurer participating in certain federally funded programs and activities must not discriminate against any individual on protected bases through the use of clinical algorithms in its decision making. We tend to see rules released 60 to 90 days after OMB receives them, so we expect the rule to drop sometime in the first quarter.
  • HFSC Forms AI Working Group. On January 11, the House Financial Services Committee announced the formation of a bipartisan working group on Artificial Intelligence, which will be led by Digital Asset, Financial Technology and Inclusion Subcommittee Chairman French Hill (R-AR) and Subcommittee Ranking Member Stephen F. Lynch (D-MA). The working group will consider how AI is impacting the financial services and housing sectors, how existing laws address the use of AI and how lawmakers “can ensure that any new regulations consider both the potential benefits and risks associated with AI.”
  • FINRA Highlights AI Risks. For the first time, the Financial Industry Regulatory Authority (FINRA) identified AI as a key emerging risk in its Annual Regulatory Oversight Report. The self-regulatory organization, which enforces rules on brokers and broker-dealers, noted that generative AI tools raise potential “concerns about accuracy, privacy, bias and intellectual property, among others.” It also noted areas on which member firms should focus when considering the use of AI and said that firms should be mindful of potential regulatory changes on the horizon.
  • FTC Enforcement Related to Privacy Commitments. A January 9 blog post by the Federal Trade Commission addresses companies that develop and host AI models and make them available to third parties. The post notes that such “model-as-service companies” have an incentive to “constantly ingest additional data,” which can conflict with their obligation to protect users’ data. “This risk is particularly salient given that customers may reveal sensitive or confidential information when using a company’s models, such as internal documents and even their own users’ data.” The post pointedly reminds model-as-service companies that “[t]here is no AI exemption from the laws on the books,” and that failure to honor privacy commitments made to users and customers can be the basis of FTC enforcement action.

Key Upcoming Events

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Faegre Drinker Biddle & Reath LLP | Attorney Advertising
