1. Introduction

The deployment of robotics and Artificial Intelligence (AI), including machine learning (ML), in healthcare and nursing is advancing rapidly. Virtual and robotic agents increasingly perform sophisticated therapeutic tasks that were previously provided exclusively by highly qualified medical professionals. Examples include chatbots serving as virtual psychotherapists, personal care robots monitoring the health and safety of elderly people, and robots treating people with disorders such as anxiety, dementia or autism.

The growing inclusion of social robots in therapeutic situations raises a variety of unresolved legal and ethical issues—including risks to patient autonomy, human dignity and trust, the potentially life-threatening effects of inaccurate or malfunctioning technology, diminished privacy due to the reliance on enormous amounts of personal (sensitive health) data, new challenges to data security due to the cyber-physical nature of robots, and the problem of how to obtain informed consent to medical treatments based on opaque AI decision-making.

From this broad spectrum, the present article focuses on the protection of the health and safety of patients and care recipients under EU law. Other aspects, such as data protection, cybersecurity or the overarching questions of patient autonomy, human dignity and trust, are not at the centre of this contribution, as they would require a separate study dealing not only with EU law but also with the law of the Member States. The specific issues surrounding the regulation of foundation models and generative AI, in particular ChatGPT, are likewise not discussed here in detail.

In the EU, medical devices, including software, are primarily regulated by the Medical Device Regulation (MDR), which has been directly applicable in all Member States since 26 May 2021. The MDR aims to ʻestablish a robust, transparent, predictable and sustainable regulatory framework for medical devices which ensures a high level of safety and health whilst supporting innovation’. To this end, medical devices may only be placed on the market or put into service if they bear a CE marking. The CE marking may only be affixed if the products meet essential safety and performance requirements and have successfully undergone a conformity assessment procedure.

Additional requirements will arise in the future from the recently adopted AI Act (AIA). According to Recital 1 AIA, the regulation aims to lay down ʻa uniform legal framework in particular for the development, placing on the market, putting into service and use of artificial intelligence in conformity with Union values’, by pursuing ʻa high level of protection of health, safety and fundamental rights’.

In particular, certain medical devices with AI components are classified as high-risk products. Consequently, a provider of such a medical device must comply not only with the MDR but also with the specific requirements for high-risk AI systems laid down in the AIA.

Whether the MDR and the AIA are sufficiently adapted to the specific challenges robotics and AI bring about for the therapeutic and care relationship is an open question that deserves addressing. From the analysis in this article, it will become clear that neither the MDR nor the AIA adequately addresses the risks to patient health and safety arising in this context. Against this backdrop, the article indicates areas of future research and provides recommendations as to which aspects should be regulated in the future.

2. The MDR

2.1 Qualification as Medical Device

Under the MDR, robotic systems and software, including AI systems, can qualify as medical devices. However, this is not always the case. Rather, there may be situations in which robots and AI systems are used in a medical context without being medical devices. Hence, the first question to be examined is under what conditions a device qualifies as a medical device, so that the MDR is applicable.

2.1.1 Definition of Medical Device

According to Article 2(1) MDR, a medical device is a product (including software) which is intended by the manufacturer to be used for the diagnosis, monitoring, treatment or alleviation of a disease, an injury or a disability. In contrast, software cannot be regarded as a medical device if it fulfils a purely informational, archiving or displaying function, that is, if it only stores or graphically displays patient data without evaluating the data and thereby influencing a medical decision.

A further distinction must be made with respect to so-called lifestyle and wellness applications, which, according to Recital 19 MDR, cannot be considered medical devices either. These include, for example, applications that provide instructions for training exercises, give tips on nutrition or store the user’s weight or pulse. As the software in these cases does not serve a medical purpose, but only transmits information to the user without making any determination with regard to a disease or a disability, the MDR does not apply. In such cases, it seems problematic that no other special EU regulation exists to protect users.

2.1.2 Intended Purpose

A particular problem arises when a device can be used for medical purposes, but the manufacturer nevertheless tries to label the system as a non-medical product, for example by relying on disclaimers indicating that the system should not be used for medical purposes beyond its intended use in a lifestyle environment. According to Article 2(1) and Article 2(12) MDR, the decisive factor for a product to qualify as a medical device is the intended purpose, which results from the labeling, instructions for use or advertising materials of the manufacturer. The classification therefore does not depend on the objective purpose, but on the subjective purpose of the manufacturer. As a result, a product can objectively serve medical purposes and yet not be classified as a medical device because of the subjective intended purpose. At first glance, it is therefore within the manufacturer’s power to define whether the product is a medical device. Consequently, more and more manufacturers try to label their products as non-medical products in order to avoid the MDR regime.

However, the manufacturer’s right to determine the intended purpose is limited. The MDR specifies details that must be included in the description of the intended purpose and these consequently limit the manufacturer’s freedom to determine the intended purpose. For example, the manufacturer must provide information on the safety and performance of the product in the instructions and in the labeling, including information concerning the indication as well as contraindication, the patient group and the user profile (Annex I No 23.4(b) MDR). If contraindications or risks associated with the product are known or if the use of the medical device requires certain qualifications and knowledge of the user, the information must be provided by the manufacturer. Therefore, the subjective intended purpose is limited by objectively assessed risks, which must be set out in the instructions for use.

Whether the MDR places further limits on the manufacturer’s intended purpose in order to prevent circumvention of the MDR is currently unclear. In Germany, the Federal Court of Justice ruled in its landmark ʻActive Two’ case that the subjective determination of purpose is limited by a prohibition of arbitrariness: if the manufacturer restricts the use of its product to certain purposes, these restrictions must be comprehensible and not arbitrary. If a manufacturer violates these requirements, the objective purpose becomes the decisive criterion.

The CJEU, however, has regularly emphasised in its decisions on the Medical Device Directive 93/42/EEC that it is the intended purpose determined by the manufacturer that ultimately matters. On the other hand, the Court has not yet had to deal with cases in which a circumvention of the MDR by the manufacturer seemed likely. It therefore remains to be seen whether the Court will limit the manufacturer’s freedom to determine the intended purpose, for example by drawing on its case law on the prohibition of abuse of rights.

2.2 Conformity Assessment for AI-based Medical Devices

2.2.1 Overview

If an AI system or a robot qualifies as a medical device, this has broad consequences for the development and use of the product. Manufacturers must pass a conformity assessment procedure before they are allowed to release the device onto the market. Only if they successfully complete this procedure can they affix the CE marking to their product and place it on the market (Article 20 MDR). Medical devices are classified into four classes (I, IIa, IIb and III) pursuant to Article 51(1) MDR, according to their intended purpose and inherent risks. The higher the risk, the higher the class and the more stringent the conformity assessment procedure.

2.2.2 Static vs Dynamic Systems

Medical devices that leverage AI technology, including robotic systems, lie on a spectrum from ʻstatic’ systems, which have been trained and operate in a fixed, learned state, to ʻdynamic’ systems, which continuously learn from real-world experience in the field. The specific characteristics of dynamic systems pose major challenges to the existing medical device legislation. The conformity assessment procedure is intended to demonstrate that the requirements for medical devices are fulfilled at the time of the conformity assessment. In other words, the focus is primarily on safety risks that already exist at the time the product is placed on the market, while risks that may arise later as a result of continuous learning are not sufficiently taken into account.

Consequently, notified bodies in the EU and regulators elsewhere do not currently certify dynamic AI systems. Rather, they require the software to be ʻfrozen’ at a certain point in the learning process and evaluate it at that point. As a result, manufacturers must carry out a new conformity assessment whenever a medical device, after being placed on the market, learns from real-world data in a way that results in a significant change in its safety and performance characteristics.
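
To illustrate the underlying technical distinction, the following minimal sketch contrasts a ʻlocked’ model, which is frozen after training and therefore behaves the same at and after the conformity assessment, with an ʻadaptive’ model that continues to update its parameters on post-market data. It is offered purely as a technical illustration in Python; the dataset, the model class and the simulated stream of ʻfield’ data are assumptions made for this example and are not drawn from the MDR or from any notified body guidance.

# Illustrative sketch only (assumed dataset and model, not regulatory guidance):
# a 'locked' model is frozen after training, whereas an 'adaptive' model keeps
# learning from post-market ('field') data, so the assessed behaviour can drift.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_field, y_train, y_field = train_test_split(X, y, random_state=0)

# 'Locked' system: trained once and then frozen; this is the state that a
# notified body can evaluate, because it no longer changes after assessment.
locked = SGDClassifier(random_state=0).fit(X_train, y_train)

# 'Dynamic' system: starts from the same trained state but continues to update
# itself on a simulated stream of post-market data.
adaptive = SGDClassifier(random_state=0).fit(X_train, y_train)
for i in range(0, len(X_field), 100):
    adaptive.partial_fit(X_field[i:i + 100], y_field[i:i + 100])

print("locked accuracy:  ", locked.score(X_field, y_field))
print("adaptive accuracy:", adaptive.score(X_field, y_field))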

This requirement could hinder the development of innovative AI systems. However, considering the high risk of physical harm, such as death, disability or disease, the requirement seems appropriate. To mitigate its impact, it has been proposed that manufacturers train the AI system separately from the software that is already on the market. In this way, they could continue to train the product in a safe environment (for example, in a research department) without the need to re-certify the software at each learning step. Instead, the manufacturer could wait until, after a longer process of continuous learning, the product has improved to the point where an update or upgrade can be released that passes a new conformity assessment procedure.

Another way to address these issues could be to create a list of allowable changes and modifications that AI/ML-based medical devices may use to adapt to new data in real time without having to go through a new conformity assessment procedure, as long as the manufacturer has a quality management system in place to accommodate the modifications. According to this proposal, documented changes that alter performance but do not change the intended use or input type could be allowed as long as they do not pose a safety risk to patients. Only if the AI system exceeds the pre-defined scope of change or the original intended purpose would a new conformity assessment be required. However, it should be kept in mind that even foreseeable and permissible changes may accumulate over time and lead to a deviation from the performance for which conformity was certified. For this reason, it seems appropriate to require regular audits for systems that are allowed to alter their performance.

2.2.3 Conformity Criteria for AI-based Medical Devices

The distinctive characteristics of medical devices based on ML raise a number of unresolved questions, even for ʻstatic’ or ʻlocked’ systems, regarding the criteria that must be met during the conformity assessment procedure. Two characteristics of ML systems seem particularly problematic. First, ML is functionally dependent on data and on the quality of that data. As a result, a medical device trained on a particular dataset may only be suitable for certain purposes or groups of people, such that it can predict diseases with sufficient accuracy only in certain individuals but not in others. Closely related to this is the lack of predictability of some AI/ML models. Since learning systems are not explicitly programmed but are trained on thousands or even millions of examples, evolving by learning from experience, their predictions or decisions might not be foreseeable in a particular case, even if the system relies on ʻlocked’ algorithms that no longer learn. In contrast, Annex I No 17.1 MDR requires that devices incorporating software, or software that is a device in itself, be designed ʻto ensure repeatability, reliability and performance in line with their intended use’. Meeting these requirements can be challenging for ML systems, as their ʻautonomous behavior’ of generating outputs with limited or no human intervention can lead to unpredictable results that are difficult to assess at the time of the conformity assessment procedure.

A second concern is the ʻblack box’ nature of some ML systems, ie their opacity. Arguably, the MDR does not contain explicit provisions requiring medical devices to explain their decisions or render them transparent to the user, nor does the regulation preclude manufacturers from placing opaque devices on the market. However, according to the MDR, manufacturers have to demonstrate (and document) that the safety and performance of their devices reach the state of the art, considering the devices’ intended use. Moreover, manufacturers must provide ʻsafety and performance information relevant to the user, or any other person, as appropriate’, tailored to ʻthe technical knowledge, experience, education or training of the intended user(s)’ (Annex I No 23.1(a) MDR). A certain level of transparency is a prerequisite for complying with these requirements.

So far, neither regulators nor notified bodies nor the literature have reached a consensus on how the MDR should be applied and, if necessary, improved or supplemented by the AI Act in order to address the previously described issues of AI-based medical devices (data dependency, lack of foreseeability due to autonomy, opacity). The last point, opacity, is of particular importance here, as it is currently unclear whether AI-based medical devices should have a minimum level of transparency before they can be released on the market. Some data scientists argue that regulators should only allow inherently interpretable algorithmic models and ban AI systems whose algorithmic opacity cannot be resolved technically. However, studies show that some opaque AI systems (eg deep neural networks) achieve a much higher degree of accuracy and efficiency than transparent systems (eg deductive and rule-based systems). In such a situation, a trade-off must be made between accuracy and transparency.

The MDR (in contrast to the AIA) provides for exactly such a balancing exercise in that it allows certain risks to be accepted if they are outweighed by the corresponding benefits (Annex I No 4 MDR). Hence, inherent algorithmic opacity could be considered an acceptable risk if the manufacturer can provide evidence that the benefits of using such an AI medical device outweigh the risks.

2.2.4 Lack of Standards for Human-Robot Interaction

Finally, there are currently no established standards for the communication and interaction between AI-based devices and patients. Current requirements are primarily concerned with the physical safety of medical devices. By contrast, standards addressing the specific dangers that human-robot interaction poses to mental health and other patient rights are still lacking.

In the EU, the current legal framework relies on the basic divide between medical device law, on the one hand, and medical services law, on the other. While medical device law regulates (at European level) the physical hazards of a product before it is placed on the market, the laws and ethical principles governing medical practice (at Member State level) are intended to ensure high quality health care services and, in particular, the protection of core patient rights (such as autonomy, human dignity and trust).

However, this regulatory dualism of product versus service is called into question when medical treatment is no longer provided by humans but by AI systems interacting directly with patients. Obviously, the law applicable to healthcare professionals cannot guarantee that the AI system will be of sufficient quality when interacting with patients; after all, the quality of an AI system is determined by its program code and not by the skills of the human operating it. Policymakers are therefore rightly advised that robotic and AI applications engaging with patients will likely need to be bound by ethical guidelines and legal requirements similar to those that apply to health professionals.

3. The AIA

In light of the numerous gaps and unanswered questions surrounding the use of robots and AI in the context of the MDR, the following sections examine whether the AIA can provide satisfactory answers.

3.1 Overview of the AIA and its Implications for Medical Devices

3.1.1 Scope of Application: The Contentious Definition of ʻAI’

The AIA has a rather broad scope of application, not only with regard to its territorial scope, but also with regard to the definition of AI systems. According to the Act, an AI system is a ʻmachine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ (Article 3(1)). Recital 12 AIA states that the reference to ʻvarying levels of autonomy’ means that AI systems ʻhave some degree of independence of actions from human involvement and of capabilities to operate without human intervention’. However, it remains unclear what level of autonomy is required for software and robots to be considered ʻAI systems’. As a result, the scope of the AI Act appears to cover all types of medical software, ranging from chatbots and medical device software embedded in electronic thermometers, blood glucose meters and patient ventilators, to migraine or asthma episode prediction apps and medical image analysis software for tumor detection.

3.1.2 Medical Devices as High-Risk AI Systems

The AI Act puts most emphasis on the regulation of so-called high-risk AI systems. According to the Act, high-risk AI systems are permitted on the European market; however, they are subject to compliance with mandatory requirements and an ex-ante conformity assessment before their market deployment (Article 8ff AIA). The mandatory requirements include: the creation of a risk management system (Article 9); quality criteria for training, validation and testing data (Article 10); technical documentation (Article 11 and Annex IV) and record-keeping (Article 12); provisions on transparency and user information (Article 13); obligations for human oversight (Article 14); and obligations concerning the accuracy, robustness, and cybersecurity of systems (Article 15).

However, these requirements do not apply automatically to every AI system used in medicine. Most notably, the Act does not classify all AI applications used in the medical field as high-risk systems. Rather, ʻhigh risk’ covers only AI systems that are medical devices as defined in the MDR and subject to a third-party conformity assessment under that regulation (Article 6(1) and Annex I Section A, No 11 AIA). Since, under the MDR, only medical devices of class IIa or higher are required to undergo a third-party conformity assessment, only these systems are regarded as high-risk under the AIA.

If a medical device is a high-risk AI system, the AIA complements the MDR in that the above-mentioned essential requirements for high-risk AI systems must be assessed as part of the existing conformity assessment procedure under the MDR (Articles 16(f) and 43(3) AIA). In practice, this means that the notified body conducting the conformity assessment under the MDR must review not only the requirements of the MDR but also the specific requirements of the AIA for high-risk AI systems.

3.2 Inconsistencies Between the MDR and the AIA

The relationship between the MDR and the AIA is currently unclear. Since both legal acts apply cumulatively and neither contains a formal hierarchy clause determining which of the overlapping rules prevails, a number of inconsistencies and contradictions arise.

The AIA deviates from many of the MDR’s concepts and definitions in a way that is not consistent. For example, the AIA uses different definitions for ʻimporter’, ʻputting into service’, ʻprovider’ and ʻdeployer’ than the MDR. Different definitions in the MDR and the AIA will not only make compliance with the two regulations more complicated but could also result in one set of technical documentation defining the same terms differently.

Moreover, the risk categories in the AIA deviate from those in the MDR. Whereas the AIA qualifies all medical devices in the meaning of the MDR as ʻhigh-risk’ if they are subject to a third-party conformity assessment, the MDR uses a more nuanced approach ranging from low risk (class I), medium risk (classes IIa and IIb) to high risk (class III). This can lead to the confusing consequence of an AI system being classified as ʻhigh-risk’ under the AIA, while being classified as ʻmedium risk’ under the MDR.

In addition, many of the mandatory requirements for high-risk AI systems in the AIA overlap in a contradictory way with the MDR’s requirements for medical devices without clarity as to which of them takes precedence. For example, both the MDR and the AIA require setting up a risk management system. However, whereas Annex I of the MDR requires risks to be reduced as far as possible, Article 9(4) AIA only refers to ʻminimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements’.

One solution would be to make the MDR a lex specialis with respect to the AIA to the extent that there is an overlap. Another solution could be for the AIA to take precedence over the MDR insofar as it contains more specific essential requirements to manage the unique hazards of AI. Regardless of which solution is chosen, it is most important that there is legal certainty on the relationship between the two pieces of legislation.

3.3 Lack of Protection for Patients and Care Recipients in the AIA

In addition to the inconsistencies discussed above, the AIA does not provide adequate solutions to the specific health and safety challenges faced by patients and care recipients in the context of AI. Although the Act aims to protect health, safety and fundamental rights, it focuses on companies rather than on the humans affected. And while the Act sets forth a number of obligations of providers towards professional users (eg healthcare professionals), for example with regard to transparency and information, it does not establish duties towards the individuals affected by an AI system (patients, care recipients or others).

Moreover, the Act—and this is arguably one of the most crucial points—provides for very limited individual rights. The only rights foreseen for individuals (patients and care recipients) are a right to lodge a complaint with a market surveillance authority (Article 85 AIA) and a right to explanation of individual decision-making (Article 86 AIA). Finally, the Act’s approach to risk categorisation creates an arbitrary distinction between high- and low-risk systems used in healthcare, leaving many systems unregulated. According to the Act, AI systems in the healthcare sector are not regarded as high-risk per se, but only if they are classified as medical devices under the MDR, which is only the case if the hardware or software is intended to have a medical purpose. Since many systems are not intended by manufacturers to have such a purpose, a large number of robotic systems and AI applications remain unregulated in the context of healthcare, such as robots and AI systems used in care for daily communication with the elderly, and applications which provide instructions for workouts, give tips on nutrition, or store the user’s weight or pulse.

Robotic and AI systems used in this sector, however, pose numerous risks to patients and care recipients due to their direct and indirect effects on the human body and mental health, even if they are not classified as medical devices. Accordingly, all robotic systems and AI applications used in healthcare should be subject to basic regulations and external audit to at least verify the accuracy of the claims made and to ensure the systems’ safety and effectiveness.

4. Conclusions

The use of robots and AI systems brings not only benefits but also severe risks to the health and safety of patients and care recipients, as well as several other challenges. Neither the MDR nor the AIA provides satisfactory safeguards in this regard. The MDR only applies if the device in question has a medical purpose according to the manufacturer’s intention. As more and more manufacturers try to label their products as non-medical products (eg as lifestyle products or as products for the care of elderly people without the diagnosis or cure of diseases) in order to avoid the MDR regime, the question arises as to when such labeling must be considered an unlawful circumvention of the MDR. Another problem is that the MDR only obliges the manufacturers of a product to ensure adequate safety, whereas professional users (such as healthcare professionals) have no EU-wide harmonised obligations with respect to medical devices. The conformity assessment procedure provided for in the MDR also raises a number of issues with regard to robotics and AI, in particular whether dynamic systems can be approved and how AI systems should be tested.

The AIA will most likely not solve these problems. Rather, it leaves loopholes regarding the use of robots in the medical field unaddressed. First, the Act does not establish obligations of manufacturers and deployers towards patients. Second, it does not grant individual rights to patients, except for the right to lodge a complaint with a market surveillance authority and the right to an explanation of individual decisions. Third, AI applications in the healthcare sector are not considered high-risk applications if they do not constitute a medical device of risk class IIa or higher as defined by the MDR.

In order to address the issues raised, a number of options should be considered. First, it is important to thoroughly consider the interplay between sectoral regulations (here: the MDR) and horizontal regulations (here: the AIA). Hence, it should be examined whether the horizontal approach taken by the AIA should be set aside in the medical field in favour of a sectoral approach. In this way, the special features of robotic and AI devices in therapeutic settings and the care sector could be taken into account better than by a legal act that lays down general criteria for all robotic and AI applications.

In addition, a number of other measures should be taken to protect the health and safety of patients and care recipients. Specifically, it is necessary to develop clear criteria as to which AI applications should be subject to an ex ante conformity assessment and require regulatory approval. Furthermore, legislators should develop rules, and private standardisation organisations should draft technical standards, for the safety and security of (embodied) AI in healthcare, including for human-robot interaction. Medical staff should also be trained and prepared on when and how to use robotic and AI applications. Moreover, robotic and AI systems should be used in a transparent manner. This requires internal transparency (on the part of the manufacturer vis-à-vis professional providers), as such transparency is an essential prerequisite for healthcare professionals to make their own decisions regarding the use of AI systems and to comply with their duty to inform patients, who must give informed consent to treatment.

Additionally, for systems which continue to learn after certification (albeit on a limited basis), both the manufacturer and professional healthcare providers should engage in ongoing ex post monitoring to ensure that potential errors can be identified and corrected in a timely manner. Finally, patients should be granted individual rights enabling them to enforce their interests effectively and to claim compensation in the event of damage.

All this calls for further debate, while at the same time raising the question of whether harmonisation at the European level is possible at all. Since the EU has only limited competences in the healthcare sector, it must be assessed which rules can be adopted at the European level and which at the national level.

What is needed in any case is a public discussion about the extent to which we, as a society, are prepared to replace human therapists with machines. To this end, future (empirical) research should study both direct and indirect effects on the therapeutic relationship, as well as effects on individual agency and identity. The increasing involvement of social robots and AI systems in therapeutic situations should not simply be accepted, but rather questioned and critically evaluated with regard to both their opportunities and risks.

Acknowledgments

This work is part of the ʻGeriatronics’ project, a lighthouse initiative of the Munich Institute of Robotics and Machine Intelligence (MIRMI) at the Technical University of Munich (TUM).

  • 1
    Nicole Martinez-Martin and Karola Kreitmair, ʻEthical issues for Direct-to-Consumer Digital Psychotherapy Apps: Addressing Accountability, Data Protection, and Consent’ (2018) 5(2) Journal of Medical Internet Research Mental Health <https://mental.jmir.org/2018/2/e32/>; Amelia Fiske, Peter Henningsen and Alena Buyx, ʻYour Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy’ (2019) 21(5) Journal of Medical Internet Research 2 <www.jmir.org/2019/5/e13216/>. All URLs were last accessed 1 June 2024.
  • 2
    Joost Broekens, Marcel Heerink and Henk Rosendal, ʻAssistive social robots in elderly care: a review’ (2009) 8(2) Gerontechnology 94, 95 <https://doi.org/10.4017/gt.2009.08.02.002.00>.
  • 3
    Sarah M Rabbitt, Alan E Kazdin and Brian Scassellati, ʻIntegrating socially assistive robotics into mental healthcare interventions: Applications and recommendations for expanded use’ (2015) 35 Clinical Psychology Review 35, 36 <https://doi.org/10.1016/j.cpr.2014.07.001>.
  • 4
    Ruby Yu and others, ʻUse of a Therapeutic, Socially Assistive Pet Robot (PARO) in Improving Mood and Stimulating Social Interaction and Communication for People with Dementia: Study Protocol for a Randomized Controlled Trial’ (2015) 4(2) Journal of Medical Internet Research Research Protocols <https://doi.org/10.2196/resprot.4189>.
  • 5
    Charline Grossard, ʻICT and autism care: state of the art’ (2018) 31(6) Current Opinion in Psychiatry 474 <https://journals.lww.com/co-psychiatry/Fulltext/2018/11000/ICT_and_autism_care__state_of_the_art.9.aspx>; Paolo Pennisi and others, ʻAutism and social robotics: a systematic review’ (2016) 9(2) Autism Research 165 <https://onlinelibrary.wiley.com/doi/10.1002/aur.1527>.
  • 6
    Fiske, Henningsen and Buyx (n 1) 4.
  • 7
    Eduard Fosch-Villaronga and Angelo Jr Golia, ʻThe Intricate Relationships between Private Standards and Public Policymaking in the Case of Personal Care Robots. Who Cares More?’ in Paolo Barattini and others (eds), Human-Robot Interaction: Safety, Standardization, and Benchmarking (Chapman & Hall 2020) 9.
  • 8
    Deutscher Ethikrat (German Ethics Council), ʻRobotics for Good Care – Opinion’ (2020) 43.
  • 9
    Ivan Glenn Cohen, ʻInformed Consent and Medical Artificial Intelligence: What to Tell the Patient?’ (2020) 108(6) Georgetown Law Journal 1425.
  • 10
    The EU has limited legislative competence in the area of health. Article 168(4) of the Treaty on the Functioning of the European Union (TFEU) only allows for harmonising measures regulating the quality and safety of medicinal products and devices for medical use, whereas Article 168(7) TFEU explicitly recognises the sovereignty of Member States for (other) aspects of health policy, health services and medical care.
  • 11
    See thereto Mathias Karlsen Hauglid and Tobias Mahler, ʻDoctor Chatbot: The EU’s Regulatory Prescription for Generative Medical AI’ (2023) 10(1) Oslo Law Review 1 <https://doi.org/10.18261/olr.10.1.1>.
  • 12
    Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices [2017] OJ L 117/1. Additionally, medical devices can fall under the Machinery Regulation: Regulation (EU) 2023/1230 of the European Parliament and of the Council of 14 June 2023 on machinery and repealing Directive 2006/42/EC of the European Parliament and of the Council and Council Directive 73/361/EEC [2023] OJ L 165/1. For more details on the Machinery Regulation, see the paper by Tobias Mahler in this special issue: ie Tobias Mahler, 'Smart Robotics in the EU Legal Framework: The Role of the Machinery Regulation' (2024) 11(1) Oslo Law Review, 1-18.
  • 13
    cf Article 1 Regulation (EU) 2020/561 of the European Parliament and of the Council of 23 April 2020 amending Regulation (EU) 2017/745 on medical devices, as regards the dates of application of certain of its provisions [2020] OJ L 130/18. For some articles of the MDR, there is a transition period (cf <https://ec.europa.eu/commission/presscorner/detail/en/ip_23_23>).
  • 14
    Recital 1 MDR.
  • 15
    Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJ L 1.
  • 16
    Ruth Beckers, Zuzanna Kwade and Federica Zanca, ʻThe EU medical device regulation: Implications for artificial intelligence-based medical device software in medical physics’ (2021) 83 Physica Medica 1 <https://doi.org/10.1016/j.ejmp.2021.02.011>; Diana Heimhalt and Wolfgang Rehmann, ʻGesundheits- und Patienteninformationen via Apps’ (2014) 6 Medizin Produkte Recht 197, 202; Roman Tomasini, Standalone-Software als Medizinprodukt (Shaker Verlag 2022) 112.
  • 17
    See also Helen Yu, ʻRegulation of Digital Health Technologies in the EU: Intended versus Actual Use’ in Ivan Glenn Cohen and others (eds), The Future of Medical Device Regulation (Cambridge University Press 2022) 106 <https://doi.org/10.1017/9781108975452>.
  • 18
    cf Heimhalt and Rehmann (n 16) 202; Tomasini (n 16) 112.
  • 19
    Yu (n 17) 106; Moritz Dietel and Ivo Lewalter ʻmHealth-Anwendungen als Medizinprodukte – Vereinbarkeit mit dem HWG und Ausblick auf die neue EU-Medizinprodukteverordnung’ (2017) 2 Pharma Recht 53, 54; Heimhalt and Rehmann (n 16) 202; Katrin Rübsamen, ʻRechtliche Rahmenbedingungen für mobileHealth’ (2015) 33 Medizinrecht 485, 486; Peter Von Czettritz and Tanja Strelow, ʻʻBeam me up, Scotty’ – die Klassifizierung von Medical Apps als Medizinprodukte’ (2017) 10 Pharma Recht 433, 434.
  • 20
    Yu (n 17) 108.
  • 21
    See eg Arjan van Drongelen and others, ʻApps under the Medical Devices Legislation’ (Dutch Ministry of Health Welfare and Sport 2018) 16.
  • 22
    Elisabetta Biasin and Erik Kamenjašević, ʻCybersecurity of medical devices. Regulatory challenges in the EU’ in Ivan Glenn Cohen and others (eds), The Future of Medical Device Regulation (Cambridge University Press 2022) 55 <https://doi.org/10.1017/9781108975452>; Julia Eickbusch, ʻDie Zweckbestimmung von Medizinprodukten’ (2021) 2 Medizin Produkte Recht 52, 56.
  • 23
    Eickbusch (n 22) 56.
  • 24
    ibid.
  • 25
    BGH (2014) NJW-RR 46ff; Heimhalt and Rehmann (n 16) 201; Rübsamen (n 19) 486.
  • 26
    Rübsamen (n 19) 486.
  • 27
    Council Directive 93/42/EEC of 14 June 1993 concerning medical devices [1993] OJ L 169/1.
  • 28
    Case C-219/11 Brain Products GmbH v BioSemi VOF and Others, judgment of 22 November 2012 (ECLI:EU:C:2012:742) para 30; Case C-329/16 Syndicat national de l’industrie des technologies médicales (Snitem) and Philips France v Premier ministre and Ministre des Affaires sociales et de la Santé, judgment of 7 December 2017 (ECLI:EU:C:2017:947) paras 21–26. See further Timo Minssen, Mark Mimler and Vivian Mak, ʻWhen does stand-alone software qualify as a medical device in the European Union? – The CJEU’s decision in SNITEM and what it implies for the next generation of medical devices’ (2020) 28(3) Medical Law Review 615 <https://doi.org/10.1093/medlaw/fwaa012>.
  • 29
    See further Rita de la Feria and Stefan Vogenauer (eds), Prohibition of Abuse of Law. A New General Principle of EU Law? (Oxford University Press 2011); Martin Ebers, Rechte, Rechtsbehelfe und Sanktionen im Unionsprivatrecht (Mohr Siebeck 2016) 353ff.
  • 30
    The determinations of these classes are defined in Annex VIII MDR.
  • 31
    cf Interessengemeinschaft der Benannten Stellen für Medizinprodukte in Deutschland (Interest Group of Notified Bodies for Medical Devices in Germany), Questionnaire ʻArtificial Intelligence (AI) in Medical Devices’ (Version 4, 9 June 2022) A.1. <www.ig-nb.de/index.php?eID=dumpFile&t=f&f=2618&token=010db38d577b0bfa3c909d6f1d74b19485e86975>. For a slightly different categorisation, see US Food and Drug Administration (FDA), ʻProposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)’, Discussion Paper and Request for Feedback 2019, 3 <www.fda.gov/media/122535/download>, which defines a ʻlocked’ algorithm as an ʻalgorithm that provides the same result each time the same input is applied to it and does not change with use’, as opposed to an ʻadaptive’ algorithm that has the ability to continuously learn even after the medical device is distributed for use.
  • 32
    Barry Solaiman and Mark Bloom, ʻAI Explainability, and Safeguarding Patient Safety in Europe: Towards a Science-Focused Regulatory Model’ in Ivan Glenn Cohen and others (eds), The Future of Medical Device Regulation (Cambridge University Press 2022) 97 <https://doi.org/10.1017/9781108975452>; Ulrich Gassner, ʻIntelligente Medizinprodukte – Regulierungsperspektiven und Zertifizierungspraxis’ (2021) 2 Medizin Produkte Recht 41, 44.
  • 33
    cf with reference to the practice in Germany, Maria Heil, ʻInnovationsmöglichungsrecht oder Innovationshemmnis? Regulatorische Herausforderungen für KI-basierte Medizinprodukte-Software in der EU’ in Roman Grinblat, Sibylle Scholtz and Sophy Stock (eds), Medizinprodukterecht im Wandel: Festschrift für Ulrich M. Gassner zum 65. Geburtstag (Nomos 2022) 447, 463. See also Interessengemeinschaft der Benannten Stellen für Medizinprodukte in Deutschland (n 31) which points out at A.1. that only static AI (AI that has learned and operates in a learned state) is certifiable, but not dynamic AI (AI that continues to learn in the field), as the system must be verified and validated.
  • 34
    FDA (n 31) 3.
  • 35
    Yannick Frost, ʻKünstliche Intelligenz in Medizinprodukten und damit verbundene medizinprodukte- und datenschutzrechtliche Herausforderungen’ (2019) 4 Medizin Produkte Recht 117, 118; Anastasiya Kiseleva, ʻAI as a Medical Device: Is it Enough to Ensure Performance Transparency and Accountability?’ (2020) European Pharmaceutical Law Review 5, 15 <https://doi.org/10.21552/eplr/2020/1/4>.
  • 36
    Frost (n 35) 120; Kiseleva (n 35) 15.
  • 37
    ibid.
  • 38
    Kerstin Vokinger and others, ʻLifecycle Regulation and Evaluation of Artificial Intelligence and Machine Learning-Based Medical Devices’ in Ivan Glenn Cohen and others (eds), The Future of Medical Device Regulation (Cambridge University Press 2022) 19 <https://doi.org/10.1017/9781108975452>; European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry (COCIR), ʻArtificial Intelligence in EU Medical Device Legislation’ (May 2021) 11ff <www.cocir.org/fileadmin/Publications_2021/COCIR_Analysis_on_AI_in_medical_Device_Legislation_-_May_2021.pdf>.
  • 39
    Vokinger and others (n 38) 19.
  • 40
    ibid.
  • 41
    ibid 20.
  • 42
    Hussein Ibrahim and others, ʻHealth data poverty: an assailable barrier to equitable digital health care’ (2021) 3(4) Lancet Digit Health <https://doi.org/10.1016/S2589-7500(20)30317-4>; Andrew Wong and others, ʻExternal Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients’ (2021) JAMA Internal Medicine <https://doi.org/10.1001/jamainternmed.2021.2626>.
  • 43
    cf Anastasiya Kiseleva, Dimitris Kotzinos and Paul De Hert, ʻTransparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations’ (2022) 5 Frontiers in Artificial Intelligence 1, 16 <www.frontiersin.org/articles/10.3389/frai.2022.879603/full>.
  • 44
    Cynthia Rudin, ʻStop explaining black box machine learning models for high stakes decisions and use interpretable models instead’ (2019) 1(5) Nature Machine Intelligence 206 <https://doi.org/10.1038/s42256-019-0048-x>.
  • 45
    Rich Caruana and others, ʻIntelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission’ (2015) Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 1721 <http://people.dbmi.columbia.edu/noemie/papers/15kdd.pdf>; Bernhard Waltl and Roland Vogl, ʻExplainable Artificial Intelligence – The New Frontier in Legal Informatics’ Jusletter IT (22 February 2018).
  • 46
    The AIA is rather one-sided because it focusses mainly on possible risks and their prevention, without mentioning possible benefits. Cf Martin Ebers, ʻTruly Risk-Based Regulation of Artificial Intelligence. How to Implement the EU’s AI Act’ (2024), <https://ssrn.com/abstract=4870387>.
  • 47
    Kiseleva, Kotzinos and De Hert (n 43) 1.
  • 48
    Fiske, Henningsen and Buyx (n 1); Hannah van Kolfschooten, ʻEU Regulation of Artificial Intelligence: Challenges for Patients’ Rights’ (2022) 59 Common Market Law Review 81 <https://doi.org/10.54648/cola2022005>; Eduard Fosch-Villaronga and Jordi Albo-Canals, ʻI’ll take care of you, said the robot’ (2019) 10 (1) Paladyn, Journal of Behavioral Robotics 77, 78 <https://doi.org/10.1515/pjbr-2019-0006>.
  • 49
    Fiske, Henningsen and Buyx (n 1) 5.
  • 50
    The AI Act applies directly to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market or its use affects people located in the EU (Article 2(1) AIA). With this approach, the proposal follows the model of the GDPR which is well known for its ʻBrussels effect’, whereby markets transmit the EU’s regulations to both market participants and regulators outside the EU: see Anu Bradford, The Brussels Effect. How the European Union Rules the World (Oxford University Press 2020).
  • 51
    The definition is based on the OECD’s understanding of the term: see OECD, Explanatory Memorandum on the Updated OECD Definition of an AI System, OECD Artificial Intelligence Papers, March 2024, No 8.
  • 52
    European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry (COCIR), ʻFeedback on the Commission proposal for a European Artificial Intelligence Act’ (1 July 2022) 2 <https://cocir.peak-sourcing.com/fileadmin/Position_Papers_2021/COCIR_Feedback_AI_Regulation_-_1_July_2021.pdf>.
  • 53
    Janneke van Oirschot and Gaby Ooms, ʻInterpreting the EU Artificial Intelligence Act for the Health Sector’ (February 2022) Health Action International 7 <https://haiweb.org/wp-content/uploads/2022/02/Interpreting-the-EU-Artificial-Intelligence-Act-for-the-Health-Sector.pdf>.
  • 54
    Arne Thiermann and Nicole Böck, ʻKünstliche Intelligenz in Medizinprodukten’ (2022) 8 Recht Digital 333.
  • 55
    For a detailed discussion, see Wimmy Choi, Marlies van Eck and Cécile van der Heijden, ʻTheo Hooghiemstra and Erik Vollebregt, Legal analysis: European legislative proposal draft AI act and MDR/IVDR’ (January 2022) 16ff, <www.government.nl/binaries/government/documenten/publications/2022/05/25/legal-analysis-european-legislative-proposal-draft-ai-act-and-mdr-ivdr/Report+analysis+AI+act+-+MDR+and+IVDR.pdf>.
  • 56
    According to Article 11(2) AIA, the technical documentation required under the Act must be integrated with the technical documentation required under the MDR.
  • 57
    Choi and others (n 55) 18.
  • 58
    ibid 19.
  • 59
    See in particular Annex I No 2: ʻThe requirement in this Annex to reduce risks as far as possible means the reduction of risks as far as possible without adversely affecting the benefit-risk ratio’.
  • 60
    cf Recital (1) AIA.
  • 61
    Van Kolfschooten (n 48) 106.
  • 62
    Nonetheless, these systems can arguably be considered as AI systems that are subject to transparency obligations under Article 50 AIA.
  • 63
    Van Kolfschooten (n 48) 108.
  • 64
    See eg Arjan van Drongelen and others, ʻApps under the Medical Devices Legislation’ (Dutch Ministry of Health Welfare and Sport 2018) 16.
Copyright © 2024 Author(s)

This is an open access article distributed under the terms of the Creative Commons CC-BY 4.0 License (https://creativecommons.org/licenses/by/4.0/).