New frontiers: How AI is transforming the life sciences industry - Patient, commercial and regulatory concerns

White & Case LLP
While the implementation of AI is growing apace, obstacles to deeper adoption remain. These pressure points are consistent across subsectors: protecting sensitive data; integrating tools with legacy systems; clarifying legal and IP risks; and turning governance policies into real-world practices.

Data security tops the list of practical challenges, cited by 55 percent of respondents. The concern is clear: AI workflows often touch highly sensitive information—patient records, safety data, manufacturing parameters and commercial strategy. Missteps can trigger regulatory scrutiny, legal liability and reputational damage.

Security issues are made more complex by the way in which AI systems aggregate data from many sources, move it across teams and borders, and sometimes introduce third-party platforms into the mix. As one healthcare provider executive says: "Sensitive information may be exposed to cyber threats. Given the sophisticated cyberattacks that we see today, we do not want to risk broader use of data."

Rather than bolting on security as an afterthought, companies making steady progress tend to limit the volume of sensitive data in the first place. Common strategies include restricting how many systems a model touches, pulling only the fields needed and masking data for experimentation. Encryption in transit and at rest is standard, but there is growing emphasis on minimizing duplicates and knowing exactly where third-party vendors store or access data.
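The minimization strategies above—pulling only the fields a model needs and masking identifiers before experimentation—can be sketched in code. The following is a minimal illustration, assuming hypothetical patient records and field names; it is not a complete de-identification pipeline:

```python
import hashlib

# Hypothetical raw records; in practice these would come from a clinical system.
records = [
    {"patient_id": "P-1001", "name": "Jane Doe", "age": 54, "lab_value": 7.2},
    {"patient_id": "P-1002", "name": "John Roe", "age": 61, "lab_value": 5.9},
]

# Pull only the fields the experiment actually needs (data minimization).
NEEDED_FIELDS = {"patient_id", "age", "lab_value"}


def mask_id(value: str, salt: str = "demo-salt") -> str:
    """One-way pseudonymization so experiments never see the real identifier."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]


def minimize(record: dict) -> dict:
    """Drop unneeded fields, then mask the direct identifier."""
    slim = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    slim["patient_id"] = mask_id(slim["patient_id"])
    return slim


masked = [minimize(r) for r in records]
```

A real deployment would also need a properly managed secret salt (or tokenization service), an audit trail, and a review of whether the remaining fields could still re-identify individuals in combination.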

Security concerns sit alongside high costs (46 percent), legacy integration challenges (39 percent), scalability issues (38 percent) and skills gaps (38 percent) as day-to-day hurdles—and they are often intertwined. Older clinical and manufacturing systems were not designed for the volume and cadence of AI workflows, and connecting them safely takes time.

Integration itself is a recurring obstacle: many AI tools are incompatible with outdated infrastructure and systems, leaving organizations with an AI tool on one side and their existing infrastructure on the other, with no easy way to bridge the two.

Moreover, the talent needed to stitch modern data tooling into regulated environments remains in short supply, which can compound integration delays even when funding is available. "We've been struggling with skills gaps for completing AI-related projects," notes the head of technology of an animal health company in India.

Some respondents also cited the difficulty of retaining AI talent in competitive markets, particularly where public-sector salaries or rigid hiring structures make it hard to match industry benchmarks.

What are the practical obstacles to broader use of AI in your company?

Legal and IP concerns

Legal concerns are dominated by two issues: patient privacy and data protection (42 percent) and contractual/licensing risk (42 percent). The breakdown varies by subsector. Healthcare providers, for example, place far more weight on privacy (66 percent) than any other respondent type.
"If we are unable to protect patient data, we risk reputational damage," says the COO of a healthcare provider. "Mitigating the risk of legal claims and settlements is important to avoid any financial pressure on the company."

For pharma companies, this appears to be less of a concern: the use of AI in drug development centers on mapping molecules and their mechanisms of action to identify targets, which inherently raises fewer patient privacy and personal data protection issues for those organizations.

42%

Percentage of respondents who place patient privacy and data protection among their top two key legal risks relating to the implementation of AI

Animal health companies are more likely to cite licensing risk (60 percent), which aligns with their broader use of third-party tools and reliance on data from dispersed clinics and farms. Medical device companies, meanwhile, frequently highlight cross-border jurisdictional issues (40 percent) and licensing complexity (44 percent), given the multi-market nature of product development, field connectivity and post-market surveillance.

These concerns are not theoretical. Many valuable AI inputs—chemistry datasets, proprietary models, third-party databases and data sourced from contract research organizations (CROs)—are governed by restrictive contracts. Using them for training or fine-tuning without clear rights can lead to breach-of-contract claims, even when copyright law is less definitive.

"There could be the risk of using copyright materials for training AI," says the director of innovation of a Taiwanese healthcare provider. "Developers who do not have complete knowledge of these issues may do so unknowingly."

What are the key legal risks that relate to the implementation of AI?

The IP question

IP protection is also a grey area. While 31 percent of respondents are very concerned about potential IP infringement from using AI, another 51 percent are somewhat concerned. Just 18 percent are not worried. These views are fairly consistent across sectors.

Meanwhile, 60 percent of all respondents judge current protections for AI-assisted outputs to be weak, rising to 80 percent in animal health. Regionally, the figure hits 85 percent in Asia-Pacific, compared with 44 percent in EMEA. Uncertainty over who owns model-influenced designs or content, and whether those outputs meet patentability or authorship thresholds, is a recurring theme.

Enforcement uncertainty compounds the problem. When model-assisted content is shared across jurisdictions, companies face a patchwork of standards governing authorship, database rights and inventorship, each of which can affect whether AI-influenced innovations can be protected or commercialized.

Are you concerned about potential liability for intellectual property (IP) infringement related to the use of AI systems?

In your assessment, how robust are the protections provided by IP laws for products or content generated by AI?

Governance, training and board oversight

Many companies are taking steps to improve oversight. A solid majority (63 percent) now have formal AI training programs in place, rising to 72 percent in human pharma. This trend is likely to accelerate.

Under the EU AI Act, companies that develop, deploy or use high-risk AI systems—including many tools used in clinical decision-making, diagnostics and other medical device software—must ensure that relevant personnel receive appropriate training.

Training must cover how the system works, the intended use, known limitations and how to exercise meaningful human oversight, particularly where patient safety or product quality is at stake. This includes not only technical staff, but also those involved in the use, supervision and governance of AI systems. Under the EU AI Act, these requirements have been in effect since February 2025, meaning companies must act now to ensure compliance, particularly those operating in EU markets or selling high-risk AI systems there.

The goal is to ensure that humans remain meaningfully involved and accountable when relying on complex or opaque systems. In practical terms, this means companies must formalize training programs, keep records of participation and update materials in line with system changes or regulatory updates.

For multinational life sciences organizations, these training requirements are fast becoming non-negotiable, even for companies headquartered outside the EU—especially those marketing products there. Unless the AI Act is amended, documented, role-specific training is shifting from best practice to regulatory obligation.

Human pharma also leads on broader governance. Nearly two-thirds (64 percent) report having an AI risk-management strategy, compared with 40 percent in devices. This reflects pharma's more advanced use of AI in R&D and safety monitoring. Meanwhile, animal health firms report the highest incidence of AI-specific use policies (60 percent), driven by the fragmented nature of their clinical settings and data sources.

Board-level attention varies. Overall, 48 percent of respondents say AI is frequently discussed at the board level, but the figure rises to 64 percent in medical devices and 56 percent in human pharma. Only 32 percent of animal health companies and 30 percent of healthcare providers report the same. Regionally, North America leads (60 percent), followed by EMEA (47 percent) and Asia-Pacific (39 percent).

A vice president of a life sciences multinational notes: "AI is not a magic wand, so we're careful about piloting and ensuring compliance, especially on privacy and regulatory fronts. Internally, we've got AI tools available across the business, and there are flagship AI projects led by our executive committee focused on simplification and optimization."

Which of the following do you currently have in place in your company?

Is the governance of AI implementation discussed at board level?

Legal uncertainty

There is also a pervasive sense that legal frameworks are still catching up. Two-thirds of respondents (66 percent) agree that lack of legal certainty is a barrier to adoption. That figure jumps to 84 percent in animal health.

The concerns are not just theoretical. Respondents point to shifting requirements around documentation, life cycle monitoring, data transfers and contracting norms. In clinical settings, uncertainty also surrounds how professional accountability or product liability will work when AI contributes to decisions.

To what extent do you agree or disagree with the following statement: “Lack of legal certainty is a barrier to the use of AI in my company”

As the CEO of a healthcare provider based in Southeast Asia says: "Since AI is still evolving, and regulators are trying to control the scope of usage, some of the legal challenges remain unknown. Especially when it comes to selected aspects, the laws are changing and it creates uncertainty for us."

Even as regulatory guidance improves, the diversity of stakeholders and jurisdictions involved means AI governance will remain complex. For now, companies must build processes that are flexible, transparent and grounded in clear documentation, even when the rules remain in flux.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© White & Case LLP
