CTA Publishes New Standard For Healthcare AI

MoFo Life Sciences
In February, the Consumer Technology Association (CTA) announced a voluntary standard for healthcare products that use artificial intelligence (AI). The standard follows the Food and Drug Administration's (FDA) recently published action plan for regulatory oversight of AI and machine learning (ML)-based medical software. FDA ultimately determines the regulatory process and requirements for legally marketing AI healthcare products, generally reviewing products to ensure they are safe and effective for the consumer market, and it has worked for years to adopt a new regulatory framework for AI/ML-based software medical devices. But as ANSI/CTA‑2090 suggests, regulatory approval is only one consideration for this novel product category.

CTA’s standard focuses on the trustworthiness of healthcare AI products. While trustworthiness is not typically a regulatory requirement, CTA maintains that trust is critical to the acceptance and successful implementation of novel AI technologies, particularly in healthcare settings. The publication represents consumer and manufacturer perspectives on the key features that AI‑based medical software should offer to build trust. The standard identifies three main trustworthiness areas—Human, Technical, and Regulatory Trust—and outlines baseline features and requirements that products in each category should incorporate.

Human Trust

The Human Trust factors focus on facilitating transparent, smooth human interaction with the product. The standard encourages developers to provide users with clear descriptions of what the AI predicts, along with its clinical parameters, limitations, and performance characteristics. Users should be able to understand what the system is capable of and how it may make mistakes. The product should present information that is contextually relevant and consistent with social norms, and programs should incorporate a fault-tolerant user interface appropriate for the target audience. The standard notes that the degree of human trust required increases with the level of AI autonomy, so clear explanations of the AI’s autonomy and the role of human input are required.

Technical Trust

To build trust in the product’s ability to perform as technically expected, developers should understand potential bias in the program’s data set and mitigate it to promote system fairness. Developers should follow applicable data privacy and security requirements and be transparent about what information is collected and why. Data sources, and any merging or processing of the data used to train the program, should also be disclosed to build trust.

Regulatory Trust

The final component addresses how the product fits within the highly regulated healthcare industry. Various federal and state agencies oversee different aspects of that industry, from medical devices to provider licensure and standards of care.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© MoFo Life Sciences | Attorney Advertising
