1. Introduction

Artificial intelligence (AI) is transforming the healthcare industry as it finds increasing application across healthcare settings. However, current AI systems present challenges that may hinder their successful implementation in healthcare. Chief among them is the lack of transparency and explainability of some AI systems, which can undermine their effectiveness and reduce their uptake in the field.

This paper distinguishes between transparency and explainability, highlights the most pressing issues raised by each of these features, and provides a critical analysis of their impact on healthcare delivery. One of the main difficulties in this domain is the lack of clear concepts and uniform approaches: different studies might address the same things using different terms, or very different realities while invoking the same concepts. Given the lack of agreement among experts, the paper adopts definitions of transparency and explainability that differentiate between them. Based on this distinction, the paper analyses the main issues raised in the framework of healthcare delivery, patient harm, and the development of safe medical technologies.

What follows is a short analysis of the concepts of transparency and explainability in the light of the newly approved AI Act (AIA), the first comprehensive AI regulation in the world.

2. AI Transparency

Transparency fundamentally involves disclosing information about the AI system. Such information includes, in particular: i) its purpose (what it is designed to do and what outcomes it is intended to achieve); ii) the data used to train it (whether they are non-personal or personal data and, in the latter case, whether or not they are sensitive data); iii) the underlying algorithms (ie how they were developed and how they work); iv) its performance (performance metrics of the AI system – including accuracy, precision, and recall – and the methods used to calculate them); v) the biases (how those biases were identified and how they are being addressed); vi) the human oversight (not only whether it is in place, but also who is responsible for monitoring and controlling the AI’s use); vii) the legal compliance (what the applicable norms are and what measures are put in place to comply with those norms); and viii) how the AI works (what the underlying reasoning is).
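
To make item iv) concrete, the minimal sketch below uses hypothetical counts (not taken from any cited system) to show how accuracy, precision, and recall are derived from the confusion matrix of a binary diagnostic classifier.

```python
# A minimal sketch with hypothetical counts: how accuracy, precision and
# recall (item iv above) are computed from a binary classifier's confusion matrix.
tp, fp, fn, tn = 80, 10, 5, 105  # hypothetical true/false positives and negatives

accuracy = (tp + tn) / (tp + fp + fn + tn)  # share of all predictions that are correct
precision = tp / (tp + fp)                  # share of positive predictions that are correct
recall = tp / (tp + fn)                     # share of actual positives that are detected

print(f"accuracy={accuracy:.2f}, precision={precision:.2f}, recall={recall:.2f}")
```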

The last item – how the AI works – refers to explainability, which can thus be considered part of transparency, broadly speaking. The two scopes do not exactly coincide, as transparency covers a larger dimension that goes far beyond the way the AI operates. Although the concepts are distinct, they ultimately raise similar issues, and on some occasions they are even used as synonyms. Based on this perspective, this paper addresses transparency as the disclosure of information about the AI system other than its modus operandi, and treats explainability as a separate concept.

Frequently, AI developers decide not to comply with transparency requirements. One of the main reasons is the wish to preserve a competitive advantage. AI developers tend to view their algorithms and decision-making processes as what sets them apart from their competitors and may be unwilling to share this information with them, especially when intellectual property rights are involved. Safeguarding a competitive advantage often entails concealing information from the competent supervisory authorities and the general public as well. Another possible cause for withholding information is the fear of non-compliance with the very detailed and complex legal framework in place in some parts of the world, notably in the European Union (EU).

Several suggestions have been proposed to increase the transparency of AI technologies. One of the most promising approaches is the AI passport. It refers to a standardised document or certificate that provides information about an AI system’s capabilities, performance, and ethical considerations. The concept of an AI passport aims to disclose relevant information about the AI system and therefore reinforce accountability in the development and deployment of AI systems, by providing relevant stakeholders with a clear understanding of the system’s properties and limitations. The idea has gained traction among some experts and organisations as a potential tool for improving the transparency of AI systems. However, it has not been included in the AIA.

3. AI Explainability

Explainability – or rather its absence – concerns a characteristic shared by many AI systems: opacity. Three distinct categories of AI systems are commonly recognised in this regard.

There are explainable AI systems – often referred to as ʻwhite boxes’ – that enable users to understand and articulate the ʻbehaviour’ of the AI. While these systems offer justifications and accountability, their capabilities are limited.

There are also semi-transparent AI systems, known as ʻgrey boxes’, which lie between opaque and fully transparent. These systems provide some insight into their processes but do not grant complete access.

Lastly, there are opaque AI systems, famously known as ‘black boxes’, where neither the user nor the developer can access the internal workings that produce the system’s outputs. These models correlate specific data to generate output, but the complexity of this process makes it challenging for data scientists, programmers, and users to understand how the model reaches its conclusions. Although these systems are often highly accurate, this advantage comes at a price. The ‘black box problem’ can be described as the inability to comprehend the decision-making processes of an AI system, leading to an inability to anticipate its decisions or outputs, with consequent shortcomings in terms of accountability and responsibility. The challenge of black-box AI is especially prominent in deep learning and methodologies reliant on artificial neural networks, where layers of hidden nodes process inputs and produce outputs without revealing their internal workings. The secrecy inherent to their internal operations (lack of transparency) adds an extra layer of complexity to the already intricate realm of AI.
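
To illustrate the point about hidden layers, the short sketch below builds a tiny feed-forward network with invented, randomly generated weights (purely hypothetical, not any system discussed here): it produces an output score, yet nothing in the intermediate numbers offers a human-readable rationale.

```python
# A minimal sketch (hypothetical random weights): a tiny feed-forward network whose
# hidden layer turns inputs into an output through matrix products and
# non-linearities; the individual weights offer no human-readable reasons.
import numpy as np

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # hidden layer -> output

def predict(x):
    hidden = np.tanh(x @ W1 + b1)                 # hidden activations: not interpretable
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))  # output 'risk score' between 0 and 1

patient_features = np.array([0.2, -1.3, 0.7, 0.0])  # invented inputs
print(predict(patient_features))                     # a score, with no stated reasons
```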

4. Transparency and Explainability in EU Regulations

In the EU, one of the most pressing concerns regarding legal compliance revolves around the General Data Protection Regulation (GDPR). The GDPR imposes numerous requirements on the processing of personal data, particularly concerning sensitive data, such as health data. This is especially critical for AI dealing with healthcare-related data, which are considered highly sensitive.

Looking to the future, the application of the AIA will introduce another layer of demands into the already intricate European legal framework. This is particularly relevant for high-risk AI applications, a category that will encompass most AI systems used in healthcare: many of them qualify as medical devices, which, when AI-based, fall under the high-risk category as defined by Article 6 of the AIA.

4.1 Transparency in EU Law

The principle of transparency is one of the core principles stated in Article 5(1)(a) of the GDPR. Under the GDPR, individuals have the right to be informed about the processing of their personal data, including when AI systems are involved. Specifically, Article 13 and Article 14 of the GDPR outline the requirements for providing individuals with transparent information. The GDPR emphasises the importance of transparency in AI systems, ensuring that individuals are aware of how their personal data are processed, the involvement of AI technologies, and the potential impact of automated decision-making on their rights and freedoms.

Article 22 of the GDPR is the norm most closely related to AI. It grants individuals the right not to be subject to automated decisions, including profiling, that significantly affect them, unless certain conditions are met (see also Recital 71 of the GDPR). Those conditions – which typically involve contractual necessity, explicit consent, or authorisation by Union or Member State law – do not encompass transparency requirements. Still, when AI transparency is discussed in relation to the GDPR, Article 22 is usually cited.

In addition to the legal framework for data protection, another set of norms to consider is the one related to AI. The process began with the European Commission’s white paper on AI and the European Parliament’s framework on the ethical aspects of AI, which emphasised the importance of transparency in legal and ethical frameworks, respectively. Building on these documents, the EU lawmakers eventually agreed on a legal framework for AI.

The AIA outlines transparency requirements in different parts of its text. Recital 71 highlights the importance of maintaining comprehensive records and documentation throughout the entire development and use of AI systems to ensure proper oversight and guarantee transparency.

Article 13 specifically addresses transparency in high-risk AI systems, specifying the type of information that must be provided, the form in which it should be presented, the intended recipients of the information (AI system users), and the responsibility of AI providers to deliver it. The inherent complexity of AI also plays a role here: AI systems can be so complex that providing transparent information becomes challenging.

The legally imposed level of transparency is not always the same but varies according to the specific AI system, as follows from the (enigmatic) expression ‘appropriate type and degree of transparency’ (Article 13(1) of the AIA). This raises an immediate question as to which entity is responsible for evaluating and determining the extent of such transparency. Pending further clarification, it is likely that, at the stage of market admission, the body entrusted with the conformity assessment will bear the responsibility of evaluating the transparency of these systems.

Article 26(5) highlights that deployers (professional users) should oversee the functioning of high-risk AI systems based on the provided instructions for use. Consequently, transparency requirements oriented towards compliance should enable users to effectively monitor the operation of the AI system, establishing a crucial interplay between user empowerment and compliance.

Another reference to transparency can be found in Annex IV, which specifies the technical documentation required by Article 11(1) for high-risk AI systems. Such documentation must include ‘the description of the system architecture explaining how software components build on or feed into each other and integrate into the overall processing; the computational resources used to develop, train, test and validate the AI system’ (para 2(c) of Annex IV), a demand that includes traits of both transparency and explainability.

4.2 Explainability in EU Law

In connection with Article 22 of the GDPR, which addresses automated decision-making, including profiling, the data subject is granted the right to ‘meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing’ (thus, explainability), a right laid down in Articles 13(2)(f), 14(2)(g), and 15(1)(h) of the GDPR. This information is considered ‘empowering’, as it is crucial and instrumental in enabling individuals to exercise the rights outlined in Article 22, including the ability to express their views on the automated decision and to contest it if needed.

Explainability plays a crucial role in the AIA, even though the concept had little relevance in the original version of the text. The most important segments in this regard are Recital 72, Article 13(1), and Annex IV, which stipulate that high-risk AI systems must be designed and developed in a manner that guarantees sufficient transparency for users to comprehend the system’s outcomes and use them correctly. While the focus is on transparency, the wording indicates an underlying intention of ensuring explainability. Article 13(1) does not merely demand transparency (the word even appears in the title) but also specifies that ‘high-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately’, which clearly refers to explainability.

5. How Does the Lack of AI Transparency and Explainability Affect Healthcare Delivery?

5.1 Accountability

5.1.1 The Product Liability Directive

Accountability regarding the use of AI in the medical field is closely tied to transparency. To illustrate, let us consider a scenario where a patient suffers significant harm during a medical procedure involving both a human doctor and an AI system. Deprived of information about the AI system, including the data used to train it, a court would be unable to pinpoint the source of harm inflicted upon the patient: was it due to insufficient or flawed data or because the doctor misused the AI or misinterpreted its results?

Similarly, explainability also plays a crucial role in the assessment of accountability. The more opaque and mysterious the AI algorithm is, the more challenging it becomes to determine the cause of errors related to patient outcomes. For instance, if a medical AI system is employed to diagnose a patient with a specific condition, but later the diagnosis is found to be incorrect, it may prove difficult to ascertain whether the error originated from the AI system itself or from the way it was used.

In summary, transparency and explainability are essential for establishing accountability in cases involving AI in healthcare. Without access to information about the AI system and a clear understanding of its inner workings, identifying the causes of errors or harm becomes significantly more complex.

The EU is not oblivious to these issues and is creating specific norms on compensation for harm caused by AI systems, norms intended to take their challenging features into account.

The revised version of the Directive on Liability for Defective Products (Revised Product Liability Directive, RPLD) seeks to repeal and replace the 1985 Product Liability Directive (Directive 85/374/EEC) with an updated framework that better reflects the digital economy. The distinctive feature of the Product Liability Directive is that it establishes a strict liability regime for situations that meet its criteria. The Explanatory Memorandum of the RPLD confirms that AI systems are ‘products’ – something that was not clear in the original version – and thus come under its scope. As a result, its strict liability regime is expressly extended to AI, clarifying some pre-existing doubts in this regard.

Under this legal regime, the plaintiff is still required to substantiate the product’s defect, the harm incurred, and the link between the defect and the harm (causation), but the need to establish the manufacturer’s negligence is eliminated (Article 9(1) RPLD). In some cases, the defect itself can be presumed (Article 9(2) RPLD), or the causal link (Article 9(3) RPLD), or even both (Article 9(4) RPLD), even though these are rebuttable presumptions (Article 9(5) RPLD). For these reasons, the RPLD is especially valuable for products known for their opacity and lack of transparency.

5.1.2 The AI Liability Directive

The other legal document under discussion is the proposed AI Liability Directive (AILD). It establishes specific solutions concerning culpability and causation to simplify legal claims for compensation for harm suffered by end users in the course of the use of high-risk AI systems. Its norms apply to a wide range of actors, including providers, developers, and professional users (terms to be interpreted in accordance with the definitions outlined in the AIA, as stated in Article 2(4) of the AI Liability Directive), the latter category including hospitals and healthcare staff.

Under the proposed AI Liability Directive, individuals may seek a court order compelling a defendant to disclose relevant evidence concerning a high-risk AI system, a measure aimed at substantiating allegations of wrongdoing. As mentioned, the AIA outlines specific mandates for documentation, information, and record-keeping concerning high-risk AI systems, but it does not enable harmed individuals to access that information or to file a lawsuit based on its absence. Within the context of the proposed AI Liability Directive, courts can order the disclosure of such information, as long as the evidence is deemed essential and reasonable to support the claim and provided the claimant has taken all reasonable measures to gather the pertinent evidence independently. If the order is not complied with, the proposed AI Liability Directive allows for a presumption of culpability (the text of the Directive calls it a ‘presumption of non-compliance’ – Article 3). Another way the proposed Directive alleviates the victim’s burden of proof is by introducing a ‘presumption of causality’ (Article 4): if the injured party can demonstrate that a party failed to fulfil a pertinent obligation linked to the harm and that there is a reasonable likelihood of a causal connection between the harm and the AI’s functioning, the court may infer that this breach of legal obligations led to the damage incurred. This legal regime will be especially useful for AI systems known for their lack of transparency and explainability.

5.1.3 Possible consequences of these new legal regimes in terms of transparency and accountability in health

Assuming that the final version of the RPLD is similar to the current one, one can predict that developers/manufacturers of AI systems used in healthcare delivery will be systematically held accountable for harm suffered by patients in the course of care. There are, however, some ways to avoid such an outcome. If the manufacturer followed mandatory safety regulations (Article 6(1)(f) RPLD) and patients were adequately informed about the AI’s use and associated risks (Article 6(1)(h) RPLD), opaque medical AI systems might not be categorised as defective products subject to strict manufacturer liability under national law, on the assumption that the non-interpretable reasoning process of the AI is beyond the manufacturer’s control (Article 6(1)(e) RPLD). Another option is to invoke one of the exceptions to liability listed in Article 10. However, if the AI system is indeed a black box, the defence will be challenging: even with access to the general information pertaining to the AI (transparency), the defendants themselves will be unable to understand how the system works.

The proposed AILD might also not be of much help: Recital 15 states that when damage results from a human assessment of the AI system’s output, the Directive will not apply, because in that case the damage can be traced back to a human. The problem, however, is that if the AI is a black box, the plaintiff will then only be able to resort to general liability rules, without any presumptions to assist his/her claim.

5.2 Biases

Biases refer to flaws in AI systems that might ultimately result in discriminatory outcomes. Biases can originate from two primary causes: coding and data. Coding is done by humans, and human coders may inadvertently incorporate their biases into the algorithms they create, thus making the AI system equally biased. Data are the driving force of AI, but when the training data reflect existing biases in society, the AI algorithms will unintentionally adopt and perpetuate those biases in their outputs. The flawed training data problem is often referred to as ‘garbage in, garbage out’.

Let us consider an AI system used to order the list of patients waiting for surgical interventions, a list based not only on chronological criteria (a first come, first served basis) but on several other criteria related to the patient and/or the medical condition. Suppose patient X is put at the end of the list and remains there over time, while all newcomers are systematically placed ahead of him. The reason might be that all the other patients have more severe medical conditions than patient X, making their placement on the list reasonable and lawful. However, it might also be the case that, because of patient X’s family name or his postal code (and thus the neighbourhood where he lives), or both elements in conjunction, the AI system inferred that this person belongs to an ethnic minority or to a low socioeconomic stratum. Based on these inferences, the AI system portrayed patient X as someone with low economic resources, no health insurance, and no regular access to health services, with untreated comorbidities, and usually non-compliant with medical recommendations – in sum, a patient for whom the surgical intervention would most likely have fewer chances of success compared to other patients, who were therefore given priority. This second hypothesis could occur because the data used to train the AI system systematically portrayed patients with the characteristics that patient X apparently has as a ‘problematic type of patient’. The reason for this could be that the training data were biased, carrying preconceived ideas about certain categories of individuals and how they behave. The lack of information about the data used (non-transparency) prevents the identification of the cause of the AI’s decision.

Moreover, the lack of explainability makes it difficult to understand the reasoning behind the AI’s decision and identify its biases. In the aforementioned example, if the AI system responsible for prioritising medical interventions operates as a black box without providing the steps taken to reach decisions, it will be challenging to determine if there is unfair discrimination or legitimate reasons for prioritising one patient over another.

Another type of bias can occur when the training data only represent a specific category of individuals, leading the AI system to fail when dealing with other populations. For example, if an AI clinical decision support software for skin cancer treatment is primarily trained with data from Caucasian patients, its recommendations may be less accurate for patients with other skin tones (Asian or black populations), thus failing to correctly diagnose the latter. To address these issues, it is crucial to have transparency regarding the training data, ensuring they represent the entire population without carrying biases. However, this requires knowledge of the data source and its nature, which is not always available, leading to undetectable biases that remain unresolved.
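
As an illustration of the kind of transparency check this implies, the short sketch below (with hypothetical counts and group labels, not taken from any cited study) simply reports how each skin-tone group is represented in a training set, so that under-representation can be spotted before deployment.

```python
# A minimal sketch with hypothetical counts and labels: reporting how each
# skin-tone group is represented in a training set, so under-representation
# can be detected before the system is deployed.
from collections import Counter

# Hypothetical group labels attached to each training image (Fitzpatrick-style buckets).
training_skin_tones = ["type_I_II"] * 900 + ["type_III_IV"] * 80 + ["type_V_VI"] * 20

counts = Counter(training_skin_tones)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.1%} of the training set)")
```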

5.3 Erroneous Results

AI biases are not the sole cause of mistaken results; another significant concern is the susceptibility of AI to manipulation and deception. AI systems often achieve results that are as good as, or even better than, those reached by doctors in various medical specialities. However, no technology is flawless, as demonstrated by tests conducted on image classifiers that still struggle to distinguish between cats and frogs. While this example may seem trivial, similar errors in medical settings could have far more severe consequences.

The absence of explainability and transparency further compounds the challenges of troubleshooting and debugging AI systems, ultimately leading to reduced accuracy. When an AI model produces incorrect predictions, determining the underlying cause becomes problematic without a clear understanding of the decision-making process employed by the model. Information about the AI system serves as a proactive measure to prevent issues from arising: only through a deeper understanding of the system’s behaviour, the data, and the algorithms used (among other information) is it possible to detect potential vulnerabilities. This, in turn, allows timely error detection and correction (debugging) and thus contributes to more accurate AI.

5.4 Justification of Medical Decisions

In the medical field, it is essential to comprehend the reasoning behind medical decisions. This aim requires transparency and, above all, explainability.

From the perspective of physicians, there is a strong desire to understand why a particular diagnosis is chosen over others or why a specific treatment is recommended. The significance of explainability can be seen in the case of IBM’s Watson for Oncology, which failed to gain the trust of human doctors because it could not provide reasons for departing from human-made diagnoses in specific situations.

Explanations are also crucial for patients’ informed consent, a fundamental aspect of healthcare whose omission can carry legal consequences. Informed consent aims to facilitate shared decision-making between doctors (who possess specialised knowledge) and patients (who lack medical expertise), seeking to create an equilibrium that does not naturally exist between the two. However, when decisions are made by AI or based on an AI system, the doctor will likely not have enough information about the system or how it operates, making it impossible to pass that information on to the patient. For instance, if an AI system recommends a treatment plan based on a patient’s medical history, the patient may require an understanding of how the AI system arrived at that recommendation (explainability), including the data used and the reliability of the system’s predictions (transparency). Without this information, the patient may struggle to make an informed decision about his/her treatment plan.

5.5 Lack of Trust

The absence of explainability and transparency in AI systems can have a significant impact on patient trust in the healthcare system. In healthcare delivery, the human element is crucial to establishing a unique connection between professionals and patients based on specific values and responsibilities. This connection is typically fostered through a patient-centred approach, where patient autonomy is respected, informed choices are encouraged, and alignment with patient values and preferences is prioritised.

However, patients may be hesitant to rely on AI systems if relevant information about those systems is hidden (lack of transparency), including how they operate (lack of explainability). In such scenarios, patients may lose trust in the system’s decision-making process. This lack of trust can lead to negative outcomes at both the collective and individual levels.

A major outcome is that patient compliance may decrease. Patients may be less inclined to adhere to medical recommendations provided by AI systems if they lack trust in the system’s accuracy or fairness. Non-compliance can result in poorer health outcomes and increased healthcare costs. It can lead to missed opportunities for preventative care, early pathology detection, and timely treatment.

The lack of trust can contribute to increased litigation. If patients experience negative health outcomes due to medical AI, they may be more inclined to pursue legal actions against healthcare providers (and also AI manufacturers). This rise in litigation can lead to higher legal expenses, damage the reputation of doctors and medical institutions and foster defensive medicine.

Ultimately, these factors combined can hinder innovation. If AI becomes underused, less investment may be directed towards the development and implementation of AI-based healthcare services, resulting in limited innovation and missed opportunities to improve patient outcomes.

6. The Challenges Raised by Non-Transparent and Non-Explainable AI

Compliance with transparency requirements is primarily a matter of adherence to legal obligations. From a technical standpoint, AI developers and system manufacturers can generally meet the demands for transparency: whether they do so is shaped mainly by business strategies and legal mandates rather than by any significant technical obstacle.

Explainability, on the other hand, is more complex. Unlike transparency, explainability is predominantly a technical rather than a legal matter, and providing explanations for many AI systems remains difficult given the current state of technology. The technical community recognises the importance of explainability, especially in critical fields like medicine, where understanding the basis of AI-generated diagnoses or treatment recommendations is essential for trust and ethical practice. However, the problem has not yet been fully solved.

The concept of explainable AI (XAI) has emerged as a promising area of research aiming to bridge this gap by developing AI systems that are not only effective but also capable of providing understandable explanations for their decisions. XAI seeks to make the AI decision-making process more transparent, fostering trust among users and facilitating regulatory compliance.
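
As a concrete, if simplified, illustration of what such techniques offer, the sketch below applies permutation feature importance – one common model-agnostic explanation method, chosen here purely for illustration – to a synthetic, hypothetical clinical dataset using scikit-learn; the feature names and data are invented, not drawn from any system discussed in this paper.

```python
# A minimal sketch (synthetic, hypothetical data): permutation feature importance
# as one model-agnostic XAI technique, indicating which inputs an otherwise
# opaque model relies on by measuring the accuracy drop when each feature is shuffled.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # 500 hypothetical patients, 4 features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
feature_names = ["age", "blood_pressure", "biomarker_a", "postal_code_index"]  # invented names

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # the 'black box'

result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: mean accuracy drop {importance:.3f}")
```

Even such a simple report only indicates which features the model relies on; it does not reveal why, which is precisely the residual obscurity discussed next.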

Despite potential developments to enhance explanation capabilities in black-box AI systems used in medicine – namely, XAI – there will always be an inherent level of obscurity unless all AI becomes fully explainable (which seems unlikely, at least in the near future). Some uncertainty will remain as to how these systems arrive at certain conclusions, posing a challenge for validation, accountability, and ethical oversight.

In essence, while it is feasible to meet transparency requirements from a technical and legal standpoint, the journey towards fully explainable AI is fraught with inherently technical challenges. The quest for explainability in AI, particularly in sensitive sectors like healthcare, underscores the need for ongoing research and innovation. It highlights the imperative to develop AI systems that not only perform complex tasks with high levels of accuracy but also do so in a manner that is accessible and understandable to human users. This balance is critical for fostering trust, ensuring ethical use, and facilitating the responsible integration of AI into society.

  • 1
    Nuffield Council on Bioethics, ʻArtificial Intelligence (AI) in Healthcare and Research’ (Nuffield Council on Bioethics 2018) <https://www.nuffieldbioethics.org/wp-content/uploads/Artificial-Intelligence-AI-in-healthcare-and-research.pdf> accessed 20 April 2024.
  • 2
    Also distinguishing between the two, see, amongst many others, Rita Matulionyte, Paul Nolan, Farah Magrabi, Amin Beheshti, ʻShould AI-Enabled Medical Devices Be Explainable?’ (2022) 30(2) International Journal of Law and Information Technology 151 <doi.org/10.1093/ijlit/eaac015>.
  • 3
    Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJ L 1. The AIA is the product of a development that started with the European Commission (European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts), followed by versions from the Council (Council of the European Union, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – General approach, 25 November 2022 (14954/22)) and the European Parliament (European Parliament, Artificial Intelligence Act, Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), until this last version.
  • 4
    An overview of the original version of the AIA in Vera Lúcia Raposo, ʻEx machina: preliminary critical assessment of the European Draft Act on artificial intelligence’ (2022) 30(1) International Journal of Law and Information Technology, 88 <doi.org/10.1093/ijlit/eaac007>.
  • 5
    Cf Heike Felzmann, Eduard Forsch Villaronga, Christoph Lutz, and Aurelia Tamò-Larrieux, ʻTransparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns’ (2019) 6(1) Big Data & Society <doi.org/10.1177/2053951719860542>.
  • 6
    This is the position of Anastasiya Kiseleva, Dimitris Kotzinos and Paul De Hert, ʻTransparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations’ (2022) 5 Frontiers in Artificial Intelligence <doi:10.3389/frai.2022.879603>.
  • 7
    See, for instance, Jose Bernal and Claudia Mazo, ʻTransparency of Artificial Intelligence in Healthcare: Insights from Professionals in Computing and Healthcare Worldwide’ (2022) 12 Applied Science <doi.org/10.3390/app122010228>; Thomas Quinn and others, ʻThe Three Ghosts of Medical AI: Can the Black-Box Present Deliver?’ (2022) 124 Artificial Intelligence in Medicine <doi: 10.1016/j.artmed.2021.102158>.
  • 8
    European Parliament, Artificial Intelligence in Healthcare – Applications, Risks, and Ethical and Societal Impacts (May 2022) 48–50, <https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729512/EPRS_STU(2022)729512_EN.pdf> accessed 24 June 2024.
  • 9
    Cf Octavio Loyola-Gonzalez, ʻBlack-box vs. white-box: Understanding their advantages and weaknesses from a practical point of view’ (2019) 7 IEEE access 154096 <doi.org/10.1109/ACCESS.2019.2949286>.
  • 10
    Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
  • 11
    According to Article 6(1)(a) of the AIA, Annex I incorporates EU Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC and Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU.
  • 12
    Felzmann and others (n 5).
  • 13
    Scott Zoldi, ʻGDPR: Time to Explain your AI, Financier Worldwide’ (Financier Worldwide, August 2017) <https://www.financierworldwide.com/gdpr-time-to-explain-your-ai> accessed 12 January 2024.
  • 14
    European Commission, ʻWhite Paper on Artificial Intelligence: A European Approach to Excellence and Trust’ (2020) <https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf>.
  • 15
    European Parliament, Resolution of 20 October 2020 with Recommendations to the Commission on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies (2020/2012(INL)), <https://www.europarl.europa.eu/doceo/document/TA-9-2020-0275_EN.html>.
  • 16
    This piece will only address transparency requirements for high-risk AI systems, as most AI systems used in healthcare fall into this category. See also Karla A Cepeda Zapata and others, ʻAnalysis of the Classification of Medical Device Software in the AI Act Proposal’ (2023) in Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI) <https://ceur-ws.org/Vol-3456/paper1-2.pdf>.
  • 17
    The same idea in Maciej Gawroński, ʻExplainability of AI in Operation – Legal Aspects’ (LinkedIn, 18 April 2023) <https://www.linkedin.com/pulse/explainability-ai-operation-legal-aspects-maciej-gawro%C5%84ski/> accessed 20 April 2024.
  • 18
    ʻDeployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72’.
  • 19
    Maria Bottis, Fereniki Panagopoulou-Koutnatzi, Anastasia Michailaki, and Maria Nikita, ʻThe right to access information under the GDPR’ (2019) 3(2) International Journal of Technology Policy and Law 131 <https://doi.org/10.1504/IJTPL.2019.104950>.
  • 20
    The equivalence between transparency and explainability can also be seen in the 2020 White Paper on AI (European Commission (n 14)).
  • 21
    Rebecca Williams and others, ʻFrom transparency to accountability of intelligent systems: Moving beyond aspirations’ (2022) 4 Data & Policy e7 <doi.org/10.1017/dap.2021.37>.
  • 22
    Proposal for a Directive of the European Parliament and of the Council on liability for defective products, COM/2022/495 final <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022PC0495> accessed 4 June 2024.
  • 23
    Teresa Rodríguez de las Heras Ballell, ʻThe revision of the product liability directive: a key piece in the artificial intelligence liability puzzle’ (2023) 24 ERA Forum 247, <doi.org/10.1007/s12027-023-00751-y>.
  • 24
    Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), COM/2022/496 final, <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022PC0496> accessed 4 June 2024.
  • 25
    Philipp Hacker, ʻThe European AI liability directives – Critique of a half-hearted approach and lessons for the future’ (2023) 51 Computer Law & Security Review, 10587, <doi.org/10.1016/j.clsr.2023.105871>.
  • 26
    This includes all AI systems (Recital 28 of the AI Liability Directive) and not merely the ones classified as high-risk.
  • 27
    Mindy Nunez Duffourc and Sara Gerke, ʻThe proposed EU Directives for AI liability leave worrying gaps likely to impact medical AI’ (2023) 6(1) Npj Digital Medicine 1, 4 <doi.org/10.1038/s41746-023-00823-w>.
  • 28
    The European Data Protection Supervisor also expressed concerns about this norm in European Data Protection Supervisor, ʻOpinion 42/2023 on the Proposals for two Directives on AI liability rules’ (2023), 9–10 <https://www.edps.europa.eu/system/files/2023-10/23-10-11_opinion_ai_liability_rules.pdf> accessed 4 March 2024.
  • 29
    Samara J Donald, ʻDon’t Blame the AI, it’s the Humans who Are Biased’, Towards Data Science (2019) <https://towardsdatascience.com/dont-blame-the-ai-it-s-the-humans-who-are-biased-d01a3b876d58>, accessed 12 April 2024.
  • 30
    Monique F Kilkenny and Kerin M Robinson, ʻData Quality: “Garbage In – Garbage Out”’ (2018) 47(3) Health Information Management Journal, 103 <doi:10.1177/1833358318774357>.
  • 31
    Vikas Hassija and others, ʻInterpreting Black-Box Models: A Review on Explainable Artificial Intelligence’ (2024) 16 Cognitive Computation, 45, 47 <doi.org/10.1007/s12559-023-10179-8>.
  • 32
    Yuyang Liu and others, ʻArtificial Intelligence for the Classification of Pigmented Skin Lesions in Populations with Skin of Color: A Systematic Review’ (2023) 239(4) Dermatology 499 <doi.org/10.1159/000530225>.
  • 33
    Eyal Zimlichman, Wendy Nicklin, Rajesh Aggarwal and David W. Bates, ʻHealth Care 2030: The Coming Transformation’ (2021) 2(2) NEJM Catalyst Innovations in Care Delivery <doi:10.1056/CAT.20.0569>.
  • 34
    James Wexler, ʻFacets: An Open Source Visualization Tool for Machine Learning Training Data’ (Google AI Blog, 2017) <https://ai.googleblog.com/2017/07/facets-open-source-visualization-tool.html> accessed 11 January 2024.
  • 35
    Amina Adadi and Mohammed Berrada, ʻPeeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)’ (2018) 6 IEEE Access 52138, 52143 <doi:10.1109/ACCESS.2018.2870052>.
  • 36
    Juan M Durán, ʻDissecting Scientific Explanation in AI (sXAI): A Case for Medicine and Healthcare’ (2021) 297 Artificial Intelligence <doi.org/10.1016/j.artint.2021.103498>.
  • 37
    Kiseleva, Kotzinos and De Hert (n 6).
  • 38
    Ryann L Engle and others, ʻEvidence-Based Practice and Patient-Centered Care: Doing Both Well’ (2021) 46(3) Health Care Manage Rev 174 <doi:10.1097/HMR.0000000000000254>.
  • 39
    Raquel González-Alday and others, ʻScoping Review on the Progress, Applicability, and Future of Explainable Artificial Intelligence in Medicine’ (2023) 13(9) Applied Sciences, 10778 <doi.org/10.3390/app131910778>.
  • 40
    Explaining this outcome in mental care, Surjodeep Sarkar and others, ʻA review of the explainability and safety of conversational agents for mental health to identify avenues for improvement’ (2023) 6 Frontiers in Artificial Intelligence <doi.org/10.3389/frai.2023.1229805>.
  • 41
    Julia Amann and others, ʻTo explain or not to explain? – Artificial intelligence explainability in clinical decision support systems’ (2022) 1(2) PLOS Digital Health <doi.org/10.1371/journal.pdig.0000016>.
  • 42
    Sajid Ali and others, ʻExplainable Artificial Intelligence (XAI): What we Know and what is Left to Attain Trustworthy Artificial Intelligence’ (2023) 99(3) Information Fusion <doi.org/10.1016/j.inffus.2023.101805>.
Copyright © 2024 Author(s)

This is an open access article distributed under the terms of the Creative Commons CC-BY 4.0 License (https://creativecommons.org/licenses/by/4.0/).