1. Introduction

Artificial intelligence (AI) has grown tremendously in the last few decades, bringing significant benefits to many fields, including healthcare. A multitude of studies have compared AI decisions with those of physicians, demonstrating the success of such systems within the field and their capacity to simplify difficult decision-making and relieve physicians of some of its burden. Despite these benefits, growing concerns have been raised about the use of AI in healthcare, including the risk of bias and unfair decisions and, most prominently, the issue of explainability, particularly where the use of AI in decision-making results in unexpected consequences leading to liability claims. Fault-based negligence regimes have existed for many years; however, the use of AI has brought unprecedented challenges to these regimes, forcing regulators to rethink the legislative landscape to account for the growth of technology within industries, including the healthcare sector.

This paper examines the impact of AI decision-making on medical negligence claims through a consideration of case-law from the jurisdiction of England and Wales. In examining the challenges that AI brings to medical negligence claims, the paper evaluates whether the European Union's (EU) AI Act and proposed AI Liability Directive achieve their aim of reducing the complexities of applying the law where AI is involved in decision-making. It argues that the AI Act, combined with the proposed AI Liability Directive, does not go far enough to solve the liability issues physicians currently face when making decisions with AI. The paper also looks to the future, assessing how these challenges will develop as the use and understanding of AI advances. A case study is used to illustrate the failings of the legislative stance, and recommendations are suggested that strike a balance between protecting physicians and compensating victims who suffer an injury, in order to contribute further to the discourse on explainable AI in the healthcare sector.

2. Standard Medical Negligence Claims

Medical negligence claims rely on the traditional rules of tort law: a breach of an established duty of care, and a causal link between that breach and the injury suffered. With the ever-growing inclusion of AI-enabled decisions and treatments in healthcare, this traditional application of negligence is placed under strain. In the English legal system, standard medical negligence claims have relied on the common law, using the Bolam and Bolitho tests to assess the standard of care of a physician; rules which have themselves been subject to scrutiny for a number of years. After explaining this precedent within the English legal system, the next section of the paper addresses how the framework is challenged when physicians use AI to assist with decision-making.

In arguing whether a physician has breached the standard of care expected of them, the test first established in Bolam v Friern Hospital Management Committee provided the starting point for medical negligence claims within the English legal system. Now known as the ʻBolam test’, this case established that physicians must act in accordance with a responsible body of medical opinion to uphold their duty; if it can be proven that an act has fallen outside of this standard, a physician has breached their duty.

The Bolam test was subject to major criticism for providing excessive immunity to physicians and making it difficult for patients to succeed in negligence claims. Despite this, the test stood for four decades before a much-needed extension was given in Bolitho v City and Hackney Health Authority, which adapted the Bolam test to require that the physician's act also ʻwithstand logical analysis’. This addition was seen to achieve more of a balance for victims, empowering the court to question the credibility of medical opinion.

It is important to note that the standard of care in medical negligence differs within the English legal system for cases of informed consent, which are governed by the more recent case of Montgomery v Lanarkshire Health Board, which sets out the rules for the disclosure of risk. According to the precedent of Montgomery, physicians have a duty to disclose all material risks of treatment and any reasonable alternatives to the suggested treatments. A risk is deemed ʻmaterial’ either subjectively, in that the particular patient would be likely to attach significance to it, or objectively, where a reasonable person in the patient's position would be likely to attach significance to it. If a healthcare professional fails to disclose a material risk or reasonable alternatives, this amounts to a breach of care.

The third element needed for a successful medical negligence claim, after proving a breach of care, is causation. To establish causation, a link must be shown between the physician's act and the injury suffered by the patient. Depending on the circumstances and context of the claim, a number of precedents can be relied on to satisfy this element within the English legal system. The starting point for proving causation is the ʻbut-for test’, introduced in Barnett v Chelsea and Kensington Hospital, which asks whether the injuries suffered would have occurred but for the act or omission of the healthcare professional; if the injury would not have occurred but for that act or omission, causation is established.

It is not always clear whether a physician's act caused the injury suffered, and in cases where this is difficult to establish, the precedent of Bailey v Ministry of Defence can be relied on, which relaxes and broadens the but-for test: it need only be proven that the breach of care materially contributed to the injury suffered. If the act or omission made by the healthcare professional materially contributed to the injury, causation is established. As noted, the procedure for establishing a successful medical negligence case is strongly grounded in case-law, allowing the likelihood of success to be estimated effectively when advising clients. However, this process faces a complex challenge with the introduction of AI within the healthcare setting, particularly when it is used to assist with decision-making.

3. The Challenges Brought by AI to Medical Negligence Claims

The influx of AI systems in healthcare has brought several benefits to the field, assisting physicians with decision-making and robo-surgeries and bringing more efficiency to treatment plans; because of these benefits, their use is steadily increasing. Nevertheless, there are instances where unexpected injuries may occur, ushering in a new era of medical negligence claims that question not only the actions and decisions of physicians, but also physicians who use and rely on AI when making those decisions. This use of AI poses unprecedented challenges to the standard liability regime, particularly concerning the establishment of a breach of care.

To begin, the Bolam and Bolitho test, explained in the section above, works on the assumption that doctors exercise their own judgement alone and are sole decision-makers. This assumption is clearly challenged when those circumstances differ: where doctors make decisions with AI systems, or where decisions are made without doctors altogether. Under Bolam and Bolitho, the claimant must show that the physician acted outside what other physicians consider responsible in order to establish a breach of care, and hence that they acted outside the ʻnorm’ in the standard healthcare setting. AI, however, is built on the idea of innovation, changing established standards and practices to achieve maximum efficiency. It is hoped that AI will affect the healthcare setting in the same way, paving new ways in which illnesses can be treated, which cuts against the principles that underpin Bolam and Bolitho. Applying this to the Bolam and Bolitho test, the following scenarios raise some challenging questions:

  1. A doctor uses a newly introduced AI system to assist with a healthcare-related decision. Given that they are using a new system, and are one of the first in the field to do so, would the doctor have to weigh up whether to use it at all, even if it has been approved within the healthcare setting? Applying the Bolam and Bolitho test, the claimant would have to show that other doctors would not have used the system. If the system is permitted into the healthcare system from outside, is its use a requirement or a choice, and would a choice attract scrutiny?

  2. Following this, in a scenario where a physician has the choice of making a decision with or without the assistance of an AI system, they choose to use the system. The physician bases their actions on the system's decision, and an unexpected injury occurs. Would the physician have to explain their rationale for using the system so that it can be assessed whether other physicians would agree the decision was responsible? If so, would this open the floodgates to cases in which physicians must justify their use of AI to assist with decisions they could have made themselves, rendering the use of AI in healthcare pointless because the risk of fault is heightened?

  3. Following this, if healthcare professionals needed to explain their reasoning for opting to use a system rather than not, what reasoning or justification would be sufficient? Arguably, the healthcare professional would have to show that the result produced by the AI system was in line with their own medical understanding. To do this, would they need some level of understanding of the workings of the machine?

Extending these scenarios also raises the complex consideration of possible future reliance on AI systems. Currently, AI systems can only be used to assist physicians with decision-making, in that the final decision must be made by a human, owing to the current lack of public trust in machines and concerns about their accuracy. However, if AI continues on its current trajectory, it can be assumed that some decisions may in future be made wholly by AI itself, which could lead to an over-reliance on such systems to improve efficiency within the health sector. This would raise further complex questions where unexpected injuries occur, as liability would have to fall onto a human counterpart to ensure that claimants could be sufficiently compensated. Examples include the following:

  4. In a scenario where there is widespread use of a system, applying the Bolam and Bolitho test, could a physician's decision not to use an AI system amount to a breach of the duty of care, in that other physicians would always have chosen to rely on the AI system in the physician's shoes? Could this then lead to a duty to use AI, where individuals are, to a certain extent, pressured into using AI to avoid liability claims?

  5. In future scenarios where a healthcare professional relies on an AI decision, and the decision made by the system leads to an unexpected injury, would the healthcare professional have to justify their reliance on the decision-making system to escape liability? To do this, would the healthcare professional have to explain how the machine reached its decision in order to justify such reliance?

In scenarios 3 and 5, the question of explainability becomes central. The concept of explainability has long been a major discussion point in the academic literature on AI systems, particularly where systems are described as ʻblack-boxed’. The term ʻblack-box’ refers to the technical infeasibility of understanding how a system arrives at its output decisions, where the machine is also unable to provide this explanation itself. Even if systems were explainable and could generate an explanation of a decision they reached, would the healthcare professional themselves be expected to interpret that explanation and make it understandable to a patient?

It is not unusual for physicians to adapt the way they explain surgeries and decisions so that a patient can understand the information, and if the same expectation applied to explaining the use and decision-making processes of AI, physicians would need more than medical knowledge. It would be unfair to expect that, on top of an average of seven years of education to become a physician, healthcare professionals would then need additional technological understanding and training to be able to provide patients with full explanations of the future of healthcare.

The application of Montgomery is also an interesting discussion point when considering the collaboration of healthcare professionals and AI. As explained in the section above, this precedent extends only to informed consent and the disclosure of risks, where the healthcare professional must disclose all material risks (assessed both subjectively and objectively) and alternative treatments to the suggested surgery.

In a scenario where a physician intends to complete a ʻrobo-surgery’ with assistance from AI robotics, it can be questioned whether the risks of using AI could amount to a material risk to the patient. Usually, the risks that need to be disclosed are medical in nature, but if, for example, the robo-surgery tool is a ʻblack-boxed’ system, would this need to be explained to the patient before they consent to surgery, given that it would still be something the patient themself, or objectively a reasonable patient, would want to know?

If this were the case, it is questionable whether such disclosure would be sufficient: given the current state of public trust, disclosing the risks of AI and its lack of explainability would likely ʻscare’ patients, leading them to opt for ʻnormal’ surgery rather than ʻrobo-surgery’. This would be similar to scenario 2, in that the integration of AI into surgery would be rendered pointless. The EU's AI Act, discussed more thoroughly in the next section, includes an obligation to inform consumers or citizens that AI is in use where this is not obvious, which, particularly in this scenario, could delay the integration of AI into healthcare given the current lack of public trust.

Another issue stems from the concept of vicarious liability. In the English legal system, the healthcare provider is vicariously liable where a healthcare professional has breached their duty, providing a mechanism to ensure that the claimant can be compensated. It is questionable whether vicarious liability would extend to harm caused by AI in hospitals or other facilities, or whether liability would instead fall on the manufacturer or software developer of the system. If physician liability rests with the healthcare provider, yet AI liability rests with a body external to it, medical negligence cases involving AI may descend into warfare, with each party seeking to have the other held responsible.

The above would make causation more difficult to establish. If predictions of AI ʻtaking over’ sectors come to fruition, AI could play a part in the majority of decisions made by physicians, giving physicians a basis for escaping liability by establishing the AI as a novus actus interveniens and shifting liability onto the external party. Even where causation is established, the use of AI could also raise issues of remoteness of damage, in that the assessment of damages is usually based on the degree of foreseeability. If the systems used have self-learning capabilities, as many AI systems do, foreseeability diminishes the further a system progresses through its lifecycle, allowing liability to be circumvented on the argument that the injury sustained was unforeseeable.

These complexities ultimately highlight the need for specific and strict rules to address the distinctive issues this technology brings to traditional liability regimes. As the first of its kind, the AI Act, introduced by the European Commission, sets a standard for AI regulation. The Act is likely to influence nations across the globe, much as the EU's General Data Protection Regulation (GDPR) has, highlighting the power of the ʻBrussels Effect’.

4. Current AI Regulatory Landscape

The European Union adopted its AI Act in June 2024. The Act applies horizontally and harmonises rules for the development and use of AI. It forms part of the EU's two-fold approach to regulating AI, alongside the proposal for an AI Liability Directive. The AI Liability Directive proposal was introduced in 2022 and aims to clarify the rules on non-contractual civil liability so that liability claims involving AI can be dealt with consistently with claims involving human conduct.

The Artificial Intelligence Act takes a risk-based approach, categorising AI systems according to the purpose for which they are used, with those posing an unacceptable risk banned. Systems that pose a limited risk are subject to heightened transparency obligations, including the disclosure of their use. Minimal-risk systems, such as spam filters and AI-enabled gaming, may be used without restriction, though voluntary codes of conduct are expected to regulate them. The Act itself predominantly focuses on systems categorised as high risk, imposing a number of requirements on providers and deployers of such systems. AI systems used in the healthcare field that fall under the Medical Devices Regulation (MDR) are considered high risk, and hence the majority of AI systems and devices used in the healthcare sector are categorised as high risk.

The AI Act applies to both providers (developers of AI systems) and deployers (any person, body, authority, or agency using an AI system under their authority, other than in the course of a personal, non-professional activity); healthcare facilities that use AI would therefore be considered deployers. A number of obligations are placed on providers before their systems can lawfully be placed on the market, but the scope of this paper focuses on deployers. Although not subject to as many obligations as providers, deployers of AI have requirements that they must uphold, including operating systems in accordance with the instructions for use, ensuring human oversight, monitoring the use of AI systems, informing providers of any serious incidents or malfunctioning, and conforming to existing legal obligations such as the GDPR and MDR.

The proposed AI Liability Directive, which is still under consideration, introduces two main provisions for non-contractual liability: the power for national courts to order the disclosure of evidence relating to high-risk systems suspected of having caused damage, and a rebuttable dual presumption, one regarding causation and the other regarding fault. Under Article 3 of the proposed Directive, which governs the court's power to order the disclosure of evidence, Article 3(5) introduces a presumption of non-compliance with a relevant duty of care against defendants who fail to disclose or preserve evidence as required. This presumption works as a deterrent, encouraging the disclosure of evidence and expediting court proceedings.

The second presumption introduced by the proposed Directive is a presumption of a causal link between the defendant's fault and the damage, based on three cumulative criteria. For this rebuttable presumption to be triggered, there must be:

(1) non-compliance with an obligation relevant to the harm that amounts to a breach of care (which can be presumed under Article 3(5)),

(2) that it be reasonably likely that the defendant’s negligent conduct has influenced the output that gave rise to the damage, and

(3) that the output produced by the system gives rise to the damage.

5. Do the AI Act and Proposed Directive Address the Challenges to Medical Negligence?

The obligations imposed under the AI Act reassure deployers and those subject to AI decisions that the AI available on the market has undergone a series of tests and assessments; however, due to the nature of AI, these safeguards do not ensure that AI will always perform as expected. The AI Act assists healthcare providers where systems act unexpectedly and the provider has failed to comply with its obligations under the Act, but a loophole remains where both providers and deployers have complied with their obligations yet an unexpected injury still occurs.

Where the rebuttable presumption under the proposed AI Liability Directive is not triggered by non-compliance, the burden of liability will fall on the physician involved and the related healthcare provider, placing a heavy burden on those who choose to use AI. In addition, Recital 15 of the proposed Directive states that the Directive is intended to apply only to fully automated decisions, meaning that in the healthcare scenario, where a physician is involved in ʻtaking’ the decision to the patient, the Directive would not apply. This limited scope has received major criticism, and its removal has been advocated, particularly given its failure to uphold and respect consumers' right to redress, in addition to its tension with the right not to be subject to automated decisions under the GDPR.

Without the application of the proposed AI Liability Directive, it is unclear how effectively non-compliance with the AI Act will be enforced, particularly where the provider and deployer of the AI are different entities. Without the rebuttable presumption, the weight of liability falls back onto the physician, which re-emphasises the recurring question throughout this paper: to what extent does a physician need to understand an AI decision in order to rely on it and escape liability? Applying the current English standard for medical negligence, if other healthcare professionals would also have relied on the decision, and that reliance was deemed ʻlogical’, a breach of care would not be established, leaving victims uncompensated and making it even more difficult to bring a successful medical negligence claim.

In civil claims, the burden of proof currently falls on the claimant, but if the claimant could argue that the physician should not have relied on a system (particularly one not used, or actively opposed, in other jurisdictions), would it be left to the physician to justify their reliance? If so, would that reliance require an understanding of the system? Would the physician need the technical expertise to understand and explain the decision? And how would this work with systems described as ʻblack-boxed’ and inherently unexplainable? This takes the discussion back to scenarios 3 and 5 discussed earlier in this paper, showing that the EU legislative stance is inadequate to resolve the main questions surrounding medical negligence claims involving AI.

6. Conclusions

It is clear that AI will continue to advance and transform sectors of society, improving efficiency, reducing human error, and assisting with decision-making. To reap these benefits, it is imperative that regulation catches up with the technology in order to mitigate the ʻpacing problem’. Currently, the EU's AI Act and the proposal for an AI Liability Directive fail to address the issues that the integration of AI into the healthcare setting creates for negligence regimes. The current regulatory position leaves several technical questions unanswered, highlighting the difficulties courts will face when applying the new rules.

A balance needs to be struck in medical negligence claims to protect victims and ensure that they can be fairly compensated, whilst also ensuring fairness and clarity for physicians. In an era where AI is already becoming a major feature of the sector, there needs to be a common understanding of what is expected of providers, and of how citizens will be fairly compensated in the inevitable scenario of AI not performing as expected.

Acknowledgements

I would like to express my gratitude and thanks to Lee Andrew Bygrave and Vera Lúcia Raposo for the opportunity and support in writing and submitting this article.

  • 1
    For a systematic review of studies, see Jiayi Shen, Casper Zhang, Bangsheng Jiang, Jiebin Chen, Jian Song, Zherui Liu, Zonglin He, Sum Yi Wong, Po-Han Fang and Wai-Kit Ming, ʻArtificial Intelligence Versus Clinicians in Disease Diagnosis: Systematic Review’ (2019) 7(3) JMIR Med Inform 10010 <https://doi.org/10.2196/10010> last accessed 17 January 2024. All other URLs referenced in this article were last accessed on this same date.
  • 2
    See Mildred Cho, ʻRising to the Challenge of Bias in Healthcare’ (2021) 27 Nature Medicine 2079 <https://doi.org/10.1038/s41591-021-01577-2>; Mirja Mittermaier, Marium Raza and Joseph Kvedar ʻBias in AI-Based Models for Medical Applications: Challenges and Mitigation Strategies’ (2023) 6 npj Digital Medicine 113 <https://doi.org/10.1038/s41746-023-00858-z>; Natalia Norori, Qiyang Hu, Florence Marcelle Aellen and Francesca Dalia Faraci ʻAddressing Bias in Big Data and AI for Health Care: A Call for Open Science’ (2021) 2(10) Patterns 100347 <https://doi.org/10.1016/j.patter.2021.100347>.
  • 3
    For an overview, see Julia Amann, Alessandro Blasimme, Effy Vayena, Dietmar Frey and Vince Madai, ʻExplainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective’ (2020) 20 BMC Medical Informatics and Decision Making 310 <https://doi.org/10.1186/s12911-020-01332-6>.
  • 4
    Bolam v Friern Hospital Management Committee [1957] 1 WLR 582; Bolitho v City and Hackney Health Authority [1997] 3 WLR 1151.
  • 5
    See eg Simon Fox, ʻBolam is Dead. Long Live Bolam!’ (2019) 4 JPI Law 213; John-Paul Swoboda, ʻBolam: Going, Going… Gone’ (2018) 1 JPI Law 9.
  • 6
    Bolam (n 4).
  • 7
    See eg Alasdair Maclean, ʻBeyond Bolam and Bolitho’ (2002) 5(3) Medical Law International 205 <https://doi.org/10.1177/096853320200500305>; Margaret Brazier and José Miola, ʻBye-Bye Bolam: A Medical Litigation Revolution?’ (2000) 8 Medical Law Review 85; Lord Woolf, ʻAre the Courts Excessively Deferential to the Medical Profession?’ (2001) 9 Medical Law Review 1 <https://doi.org/10.1093/medlaw/9.1.1>.
  • 8
    Bolitho (n 4).
  • 9
    Ash Samanta and Jo Samanta, ʻLegal Standard of Care: A Shift from the Traditional Bolam Test’ (2003) 3(5) Clin Med (Lond) 443.
  • 10
    [2015] UKSC 11.
  • 11
    ibid.
  • 12
    See eg the ʻbut-for’ test established in Barnett v Chelsea and Kensington Hospital [1969] 1 QB 428; a relaxation of the ʻbut-for’ test in Bailey v Ministry of Defence [2008] EWCA Civ 883 that established the ʻmaterial contribution’ test; and the ʻmaterial increase in risk’ test, applied in mesothelioma cases, established in Fairchild v Glenhaven Funeral Services Ltd [2002] UKHL 22.
  • 13
    [1969] 1 QB 428.
  • 14
    [2008] EWCA Civ 883.
  • 15
    For an overview, see Farah Magrabi, Elske Ammenwerth, Jytte Brender McNair, Nicolet F De Keizer, Hannele Hyppönen, Pirkko Nykänen, Michael Rigby, Philip J Scott, Tuulikki Vehko, Zoie Shui-Yee Wong and Andrew Georgiou, ʻArtificial Intelligence in Clinical Decision Support: Challenges for Evaluating AI and Practical Implications’ (2019) 28(1) Yearbook of Medical Informatics 128 <https://doi.org/10.1055/s-0039-1677903>; Chris Giordano, Meghan Brennan, Basma Mohamed, Parisa Rashidi, François Modave and Patrick Tighe, ʻAccessing Artificial Intelligence for Clinical Decision-Making’ (2021) 3 Frontiers in Digital Health 645232 <https://doi.org/10.3389/fdgth.2021.645232>; Christopher Kelly, Alan Karthikesalingam, Mustafa Suleyman, Greg Corrado and Dominic King, ʻKey Challenges for Delivering Clinical Impact With Artificial Intelligence’ (2019) 17 BMC Medicine 195 <https://doi.org/10.1186/s12916-019-1426-2>.
  • 16
    Adam Bohr and Kaveh Memarzadeh, ʻThe Rise of Artificial Intelligence in Healthcare Applications’ (2020) Artificial Intelligence in Healthcare 25 <https://doi.org/10.1016/B978-0-12-818438-7.00002-2>.
  • 17
    See European Parliament, Economic Impacts of Artificial Intelligence (AI) (July 2019, European Parliamentary Research Service) 1.
  • 18
    See University of Queensland and KPMG, Trust in Artificial Intelligence: A Five Country Study (March 2021).
  • 19
    See British Medical Association, ʻMedical Training Pathway’ (BMA Website, 2021) <https://www.bma.org.uk/advice-and-support/studying-medicine/becoming-a-doctor/medical-training-pathway>.
  • 20
    Montgomery (n 10).
  • 21
    Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJ L 1.
  • 22
    Cassidy v Ministry of Health [1951] 2 KB 343.
  • 23
    Artificial Intelligence Act (n 21).
  • 24
    Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1.
  • 25
    Anu Bradford, The Brussels Effect: How the European Union Rules the World (Oxford University Press 2020).
  • 26
    European Commission, ʻProposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to Artificial Intelligence (AI Liability Directive)’ (COM(2022) 496 final).
  • 27
    Artificial Intelligence Act (n 21) Article 50.
  • 28
    Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC [2017] OJ L 117/1.
  • 29
    Artificial Intelligence Act (n 21) Article 3(3) and (4).
  • 30
    European Commission (n 26) Article 3(5).
  • 31
    ibid, Article 3(5).
  • 32
    ibid, Article 4.
  • 33
    The European Consumer Organisation (BEUC), Proposal for an AI Liability Directive (BEUC Position Paper, 2023) 2.
  • 34
    Regulation (EU) 2016/679 (n 24) Article 22.
Copyright © 2024 Author(s)

This is an open access article distributed under the terms of the Creative Commons CC-BY 4.0 License (https://creativecommons.org/licenses/by/4.0/).