The use of AI in the healthcare sector: opportunities and challenges

Dentons
Artificial Intelligence (“AI”) is a disruptive technology that is rapidly affecting many industries and human activities. As the European Commission reported in its White Paper on Artificial Intelligence, AI systems can bring benefits to global society and the world economy, finding new solutions to some of the most compelling challenges of our times. Nevertheless, AI may also raise new risks and concerns, including for the protection of personal data and with respect to product safety, which must be addressed.

Although the use of AI in the healthcare sector may be seen as a developing, hidden revolution, it is indisputable that certain medical specialties (e.g. radiology, various branches of surgery, epidemiology) are already benefitting greatly from the deployment of such disruptive technologies, as recently experienced during the fight against the COVID-19 pandemic.

Indeed, the deployment of AI tools has proved extremely useful in improving the ability of doctors and healthcare professionals to understand the needs of the patients they care for, providing, for instance, more accurate and faster diagnoses while reducing the risk of erroneous evaluations. At the same time, practitioners may now rely on augmented reality devices and robots for precision surgical operations, while IoT and wearable devices enable real-time monitoring of users’ heart conditions, making it possible to detect problems before they become critical.

AI may also prove particularly valuable at the governmental level. In particular, by interpreting and inferring a vast amount of data, AI can help discover new treatments or provide governments with important data that may be used to identify patterns in order to understand where intervention and investments are needed most.

AI and healthcare: risks and challenges

As AI becomes more widespread in the clinical world, the Joint Research Centre of the European Commission recently published a Science for Policy report on Artificial Intelligence in Medicine and Healthcare. The report provides an overview of the current status of AI deployment in the healthcare sector, as well as an analysis of the impact that such technologies have on the sector.

Without prejudice to many ethical and societal impacts arising from the use of AI in the healthcare sector (e.g. humanization of care, individual free will, gene editing, etc.), the following are among the major challenges posed by the use of AI systems in the healthcare sector:

  1. Data quality and reliability. While the quality of the data used to train algorithms is always important for obtaining reliable output, in healthcare it is essential, as biases in the data sets may result in a wrong diagnosis or inaccurate treatment for the patient.
  2. Security. Strong technical and organizational security measures should be implemented in order to detect and defeat attempts to manipulate the data used by the AI application. Security is particularly important in the healthcare sector because of the nature of the personal data processed (i.e. special categories of personal data, such as health and genetic data), which is universally considered highly sensitive and whose theft or manipulation may entail very serious consequences for individuals, such as discrimination or physical harm.
  3. Data protection. The protection of the personal data processed by AI applications should be at the forefront of concerns. Individuals, namely patients, should be granted full control over the processing of their personal data, especially considering that AI technologies may infer and further process personal data, including unstructured data, recorded in medical reports and clinical trials. For the same reason, strict measures should be dedicated to monitoring the parties involved in the processing, in order to ensure that access is granted only to those who strictly require it.
  4. Transparency and explainability. For most commentators, opacity is one of the main characteristics of AI. In addition to the widely known “black box” problem, AI technologies are often too complex to be understood by human beings. Without prejudice to the transparency principle set forth by the data protection legislation, questions should be raised over whether it should be acceptable that decisions on people’s lives are based on results provided by a technology that we are not entirely able to understand.
  5. Accountability. Who should be held liable in case of errors? Can the machine override a decision taken by a doctor or healthcare professional? Some ethical issues may arise, in particular in case of divergence between the doctor’s opinion and the assessment performed by the machine.
  6. Non-discrimination and fairness. Risks of growing inequalities may arise if only high-income countries and patients can afford an AI-based healthcare system. Furthermore, without proper safeguards, significant risks to the rights and freedoms of individuals will arise if the data processed and inferred by AI systems is made available to third parties, such as insurance companies or banks (e.g. different contractual conditions may be applied on the basis of the client’s medical history).
  7. Trust. Can we currently affirm that a machine is able to perform better than a human doctor? AI systems should support doctors in making more informed choices in accordance with their expertise. Thus “life and death decisions” should still be taken by human doctors.

Policy challenges

In addition to the concerns summarized in the previous paragraph, the ability of regulators to adapt and update the legal framework to rapidly changing technologies also represents a key challenge posed by the use of AI.

As recently clarified by the EU Commission in its White Paper on AI, while EU legislation remains in principle fully applicable regardless of whether AI is involved, the legislative framework should be improved in order to address the specific risks and challenges posed by such technologies.

Ethics, liability, safety, data and consumer protection, security and opacity are all crucial aspects that should be taken into account by regulators.

Defining a legislative framework regulating the use of AI may be particularly complicated, as any new legal instrument should be sufficiently flexible to allow technical and economic progress, yet precise enough to provide legal certainty and an adequate level of protection for the individuals involved.

According to many commentators, the EU Institutions should play a leading role in setting an overarching legal framework, as well as guidance and standards applicable to the use of AI in the healthcare sector. The current pandemic may well become an occasion for reaching consensus over “human centric” AI standards.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dentons | Attorney Advertising
