New frontiers: How AI is transforming the life sciences industry - The State Of The Market

White & Case LLP

Key takeaways

AI is reshaping the life sciences industry in ways that are no longer hypothetical. The question is no longer whether to adopt AI, but how, and where, it can drive the most value. Adoption, however, is advancing at different speeds across subsectors and geographies, and evolving regulatory frameworks are influencing how organizations scale, validate and govern these next-generation tools.

Across our entire survey sample, 74 percent of executives say AI is either crucial or very important to their business strategy. Within that, 68 percent of medical device companies and 56 percent of human pharma companies say AI is crucial to their strategy. By contrast, only 26 percent of healthcare providers and 28 percent of animal health firms share this view. Regionally, 53 percent of EMEA and 53 percent of North American respondents deem AI crucial, while Asia-Pacific more often selects very important (26 percent) or somewhat important (28 percent).

74%

Percentage of respondents who say AI is either crucial or very important to their business strategy

Even though healthcare providers less commonly view AI as crucial to their success compared with other life sciences segments, many are well attuned to the reality that further investment will be needed to keep pace with the market. "Given how widely AI is being adopted across the sector, it's important to our business strategy," says the CEO of a European healthcare provider. "That's why we're evaluating an increase to our AI budget."

How important is AI to your overall business strategy?


Different stages of maturity

Despite the broad intent, strategy maturity is still developing. Only 17 percent of respondents describe their AI strategy as very developed. This maturity varies sharply by subsector: In human pharma, 67 percent say their strategy is very or moderately developed, compared with 48 percent of medical device companies, 44 percent of animal health companies and 24 percent of healthcare providers.

Planned spending suggests AI adoption is set to accelerate: 28 percent of respondents expect to invest more than US$50 million in AI over the next 12 months—up from 22 percent last year. Much of this capital is likely to go to functions and subsectors where the data and infrastructure already support deployment.

How developed are your company’s AI strategies?


A wide variety of use cases

Use cases are clustering in areas that have historically delivered ROI and better customer and patient outcomes. In R&D, for example, 64 percent of all respondents use AI regularly, climbing to 88 percent in human pharma and 74 percent in medical devices, but falling to 22 percent for healthcare providers. This pattern is unsurprising: Pharma companies manage substantial R&D budgets that can fund AI initiatives, whereas healthcare providers often operate within tighter financial constraints.

Despite these budget limitations, respondents are pressing ahead with clinical applications of AI. Not only do 75 percent of all respondents use AI regularly for medical purposes such as diagnostics, treatment support and adherence, but at least 62 percent report this application across every company type, highlighting broad-based adoption among practitioners even where organizations lack an established or mature strategy.

Supply chain management is another area in which AI is highly applicable, enhancing everything from demand forecasting to lead time tracking and inventory optimization. Our research reveals that 43 percent use AI regularly for supply chain management, led by human pharma and medical devices (54 percent and 52 percent, respectively), with slower uptake in animal health (36 percent) and healthcare providers (20 percent). Organizations without clean historical demand data or stock-keeping-unit (SKU)-level visibility can expect limited benefit from AI-powered forecasting and supply scheduling until those prerequisites are in place.

Compliance-facing usage is also trailing the more technical domains of R&D and medical provision: 61 percent say they use AI for regulatory compliance only occasionally. In legal functions, 51 percent do not use AI at all and 44 percent use it only occasionally—evidence that governance-adjacent workflows are a step behind labs, imaging suites and factories.

However, the figure for the legal function is likely to rise sharply in the coming years. Unlike areas with long-standing experience in modeling, bioinformatics and analytics—such as R&D and clinical development, where machine learning has been embedded for years and will continue to evolve at pace—most legal departments have only recently encountered AI in a form that genuinely fits their needs. For them, AI maturity effectively begins with the advent of large language models (LLMs), which can be used for routine manual tasks such as document automation and translation. What's more, there is evidence that multinational life sciences companies are developing proprietary LLMs to mitigate confidentiality and data governance risks. Legal functions are therefore earlier in their AI adoption curve.

Similarly, adoption lags across commercial business functions. Only medical device companies report a meaningful foothold in sales and marketing, where 32 percent use AI regularly. Across the sample as a whole, just 16 percent use the technology for these day-to-day commercial operations.

While use cases may differ within various businesses, depending on size and specialization, a vice president of a life sciences multinational notes that AI is broadly used across the whole organization, in operations, business intelligence, medical affairs, marketing, sales and market access. "We're using it for everything from health technology assessment submissions to productivity improvements."

How much has your organization invested in AI-led initiatives over the past 12 months? And how much do you expect your organization to invest in AI-led initiatives over the next 12 months?


How frequently do personnel currently use AI within each of the following areas of your company?


Regulatory and legal considerations

As AI adoption in life sciences accelerates, companies face rising expectations from regulators—not just about whether AI works, but how it works; where it fits into existing processes; and what controls are in place to ensure safety, transparency and reliability. And the bar is not uniform across functions: The closer a system gets to patient safety or product quality, the more scrutiny it attracts.

In R&D and clinical development, guidance is emerging around the use of AI in target discovery, protocol design, data analysis and pharmacovigilance. The European Medicines Agency's (EMA) Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle indicates what regulators are looking for: clearly defined use cases, high-quality representative datasets, documented model pipelines and traceability from data inputs to final conclusions. These expectations are already shaping how assessors review submissions and conduct inspections.

Similar principles are shaping expectations in the US. The Food and Drug Administration's (FDA) Center for Drug Evaluation and Research has published draft guidance addressing how AI should be used to support activities that inform drug submissions, such as trial design, clinical data analysis and safety monitoring. The guidance stresses the need for human oversight, validation of model fitness and reproducibility across the model life cycle, particularly when AI-generated outputs are used in regulatory filings.

For AI integrated into regulated medical devices, such as imaging tools, software for diagnosis or treatment support, and other Medical Device Software (MDSW) or Software as a Medical Device (SaMD), the rules are firmer. Europe's AI Act introduces a high-risk category that captures, inter alia, AI-based medical devices. Systems in this category must meet obligations around quality management, data governance, explainability, human oversight and post-market monitoring. Most of these obligations take full effect in August 2027, except for AI systems used for emergency calls, the prioritization of emergency services or emergency healthcare patient triage, which are in scope from August 2026. If the European Commission's proposed amendments are implemented, the obligations for high-risk AI systems will come into force later.

In the US, the FDA has made room for adaptive AI through the use of Predetermined Change Control Plans. These allow developers to specify in advance which aspects of their models may change post-approval, along with the guardrails, monitoring protocols and performance thresholds that will be maintained. This creates a legal pathway for iterative learning, as long as the plan is detailed, risk-based and transparent.

AI used in manufacturing and supply chain settings is not governed by standalone AI rules. Instead, where systems influence product quality or production decisions, they fall under existing Good Manufacturing Practice (GMP) frameworks, requiring defined use, performance validation, audit trails and formal change control. However, the FDA has issued a discussion paper and draft guidance addressing AI in the manufacturing of drugs, and has finalized guidance on "Computer Software Assurance for Production and Quality System Software" for devices, signaling continued movement toward regulation in this space.

The EU is also moving in this direction via proposed updates to its GMP guidelines, including requirements for the use of AI in the manufacturing of active substances and medicinal products. When AI is used in lower-risk areas, such as forecasting or logistics, proportionate controls still apply, centering on reliable data, documented assumptions and retained oversight.

Commercial functions, such as marketing, sales and launch planning, remain lightly regulated from a technical standpoint but are not risk-free. Issues around patient communication, algorithmic bias and AI-generated promotional content can all trigger legal or reputational concerns if left unmanaged. In these areas, firms are increasingly developing internal policies to ensure appropriate human review, record-keeping and content control.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© White & Case LLP
