As AI-Assisted Research Advances, Experts Share Worries, Oversight Strategies; Collaboration Urged

Health Care Compliance Association (HCCA)

Report on Research Compliance 21, no. 1 (January 2024)

At Cornell University, institutional review board (IRB) members meet with the chief information security officer and a liaison to the general counsel’s office. Their regular attendance has been “really critical,” said IRB administrator Vanessa McCaffery. “Even though the IRB is tasked with thinking about participant protection, we also don’t want to approve anything that’s going to potentially cause a legal issue for participants or for the institution.”

During the Q&A period following a recent talk on artificial intelligence (AI) at the annual meeting of Public Responsibility in Medicine and Research (PRIM&R), McCaffery recommended adding experts such as these to IRB meetings.[1]

“Part of my job is to realize what I don’t know,” McCaffery said at the meeting, adding that she’s been learning about AI on her own and is “kind of the AI nerd on the IRB staff right now.”

In comments to RRC after the meeting, McCaffery said she was particularly “thinking about the potential legal and security implications for human participants and for researchers that come with use of new tech such as AI transcription tools and other countless, rapidly proliferating tools and technologies.”

Like McCaffery’s, many of the concerns raised by both the speakers at the PRIM&R talk and audience members relate to ensuring the privacy and confidentiality of research participant information when AI is used, with questions about how HIPAA rules apply, or whether they apply at all, given that they were implemented long before AI came into use.

But she’s not alone in trying to understand AI and its implications for IRBs. For now, universities and other institutions employ a kind of DIY strategy in the absence of federal or even state regulations and guidance on AI. IRB officials have a double duty: analyzing how the investigators whose protocols they review are using AI, and considering AI technologies that might help their own operations and oversight.

During the talk, Donella S. Comeau, M.D., a neuroscientist specializing in clinical AI development, presented elements of various governance structures for generative AI in a health care or research setting and for IRB oversight of AI.[2]

Regardless of whether there are laws, regulations, policies or guidance, governance “must address unique features of AI,” said Comeau, who is also a neuroradiology attending at Beth Israel Deaconess Medical Center and vice chair of the Mass General Brigham (MGB) IRB. As described on her slides, these are:

“Explainability: It is often difficult to know why an AI does what it does, so regulating it and auditing it in the event of an injury may be difficult.

“Non-directed behavior: AI doesn’t need explicit instructions and can develop its own behavior, which makes it difficult to delineate regulations that would cover all potential AI behaviors.

“Emergent behavior: Non-directed behaviors also mean that AI often behaves in unexpected ways, which can create unanticipated problems.”
