1. Introduction

Laws of robotics have been part of science fiction literature at least since Isaac Asimov famously formulated his basic Laws of Robotics in the 1940s. Since then, the idea of regulating robots has transitioned from science fiction to law-making in the European Union (EU). Asimov’s Laws are embedded in the machine and postulate basic principles for the actions and inactions of robots, emphasising the need to avoid harm to humans and humanity as a primary ʻlegal’ principle. This principle of avoiding harm is also a cornerstone of the EU’s recently adopted Machinery Regulation (MR), although the latter is addressed at humans, not robots. Although this law focuses more broadly on machinery, its relevance to robots, as a type of machinery, is significant in the broader context of ideas about regulating human-robot interaction (HRI).

However, although the MR covers specific aspects of HRI, it falls short of offering a comprehensive regulatory framework for it. This is noteworthy in light of the ambitious vision of the ʻCivil Law Rules on Robotics’ envisaged by the European Parliament in 2017, which requested more comprehensive legislation for smart robots. The Parliament’s focus was future-oriented, introducing the concept of the ʻsmart robot’. While the definition of this concept is not settled, the Parliament identified key characteristics: the capability for autonomy, the potential for self-learning from experience and interaction (an optional criterion), at least minimal physical support, the ability to adapt behaviours and actions to the environment, and the distinction of being non-biological. The concept ʻsmart robot’ is not yet present in legislation, but it has been adopted in some literature as synonymous with ʻAI-enabled’ robots. An alternative approach to conceptualising a ʻsmart robot’ involves focusing on its constituent characteristics, such as autonomy and self-learning, which will be further explored below. Theoretically, the concept covers a wide range of machines, including cars, drones, personal service robots, and medical robots. For the purposes of this article, a pertinent example is a service robot that autonomously navigates a hospital environment, interacting with both patients and staff members while performing various tasks.

The 2017 EP resolution and the MR present a stark contrast in their approaches to regulating robotics. Essentially, the MR could be seen as part of the response to the Parliament’s 2017 call for a law on smart robotics and HRI, yet it represents a limited and conservative fulfilment of that vision, reaching full applicability only in 2027. The 2017 resolution was part of a broader, more forward-looking regulatory initiative aimed at integrating a new legal framework for robots across various societal domains, such as public spaces, hospitals, and care homes. Despite the resolution’s futuristic outlook and high expectations for then-emerging technologies, the widespread deployment of robots in public and social spaces has not materialised as rapidly or extensively as anticipated. Meanwhile, the development of the EU’s legal framework for robotics and AI has accelerated with the Artificial Intelligence Act (AIA) and an updated MR that addresses some aspects of smart robotics and HRI.

The MR is a highly technical, detail-oriented and lengthy legislative instrument, spanning more than 100 pages in the Official Journal. The MR’s predecessor, the Machinery Directive, has generated relatively limited attention in legal scholarly literature, as opposed to engineering literature. This may be a natural consequence of the fact that it has also served as an entry point for over a thousand technical standards that address machinery safety as soft or even hard law. Nevertheless, this paper argues that the MR deserves extensive legal scholarly attention, especially due to its interface with the widely discussed AIA. Moreover, safety is a crucial concern in HRI and must therefore be a key component of a broader regulatory approach for smart robotics.

The MR has already sparked discussions in the realm of smart robotics regulation. Assertions within certain segments of literature, suggesting that the MR offers minimal regulatory guidance for the safety of smart robotics, necessitate a re-evaluation. This paper, therefore, focuses on dissecting the MR to spotlight specific provisions that are crucial for ensuring the safety of smart robots. Despite its length, the MR contains only a few such stipulations. Identifying these requirements necessitates adapting to its specific terminology, as it does not explicitly use the terms ʻsmart robots’ or ʻAI’, but rather addresses various combinations of ʻautonomous’ and ʻself-learning’ machinery.

Consequently, this paper investigates how the MR conceptualises and regulates the integration of AI into robotic systems. It assesses the relationship between the MR and the AIA to ascertain if their substantive regulations align or differ. Another relevant dimension is assessing the MR’s protective ambit, especially its focus on health and safety compared to the AIA’s broader considerations, which include fundamental rights.

On closer scrutiny, the MR addresses three initial challenges regarding smart robotics and HRI. First, it tackles the issue of a robot’s autonomy and its supervisability in autonomous modes, emphasising that supervision is crucial when using autonomous robots. Secondly, it deals with the robot’s decision-making involving machine learning, by setting limits on self-learning to prevent unauthorised or unexpected behaviours. Lastly, it addresses human-robot interactions, transitioning from isolating hazardous machines to safeguarding spaces shared by humans and collaborative robots (ʻcobots’). ʻCobot’ is yet another term the MR does not use, although the Regulation contains requirements for a subset of such machines. While ʻsmart robotics’ emphasises the machine’s abilities to learn and act independently from direct human control, the term ʻcobot’ hinges on its interplay with humans. In practice, the future of robotics will likely involve a wide range of ʻsmart cobots’, such as personal service robots, medical robots, drones and autonomous vehicles. Although these may count as ʻmachinery’, as further elaborated below, not all smart cobots fall within the MR’s scope. For example, certain vehicles and household appliances are excluded.

The paper is structured as follows. Section 2 outlines the development and implications of regulatory frameworks for smart robotics and HRI, beginning with the idea of regulating smart robots through Asimov’s Laws. It then traces their influence on contemporary regulatory discussions, especially in light of the visionary approach taken by the European Parliament. Section 3 situates the MR within the EU’s product safety regulatory scheme, particularly highlighting the ʻnew regulatory approach’ and key elements such as CE marking and the significance of third-party assessments. Section 4 turns to terminological challenges encountered in the MR, especially its broad categorisation of machinery and the various terminology used to describe smart robotics. The focus in Section 5 shifts to how the MR deals with smart robotics and HRI, discussing the utility of specific regulatory requirements. Section 6 examines the MR’s requirements for human-robot communication. Section 7 investigates the relationship between the MR and the AIA, noting their concurrent application to smart robotics. In the conclusion, the MR is viewed as an initial step in establishing a safety framework for smart robotics, setting the stage for further legislative development to address the comprehensive spectrum of challenges and opportunities presented by the integration of robotics into society.

2. The Evolving Frameworks for Smart Robotics Regulation

This section explores the transition from Asimov’s theoretical ʻLaws of Robotics’ to concrete legislative actions within the EU, highlighting the complexities and challenges of integrating HRI safety into practical regulatory frameworks for smart robotics.

Asimov’s Laws of Robotics provide an early inspiration for the legal regulation of smart robotics. These ʻlaws’, presented in a short story, suggest embedding key principles directly into robots. Although Asimov later introduced a zeroth law to prevent robots from harming humanity, these laws have faced criticism in scholarly literature for being too robot-centric. Moreover, within his stories, the effective implementation of these laws by robots often proves challenging. However, given their frequent citation in both regulatory literature and policy discussions on robotics, the first two laws in particular offer a valuable framework for analysing the safety of smart robot interactions.

Asimov’s first law mandates that a robot must not harm humans, either actively or through inaction, addressing both psychological and physical harm. Grounded in ethical considerations, it encompasses the potential for unintentional harm, like a robot accidentally colliding with a person. The risk of such harm is influenced by the robot’s software and its mode of operation—whether pre-programmed, making autonomous decisions, or controlled by humans—introducing unique safety challenges. Asimov’s second law centres on a robot’s duty to follow human commands, a principle especially relevant for autonomously acting robots, carrying specific safety implications. Thus, although Asimov’s first two laws do not specifically mention robot safety, they imply that ensuring safety is a foundational step towards broader ethical considerations.

Fast forward to 2017, when the European Parliament adopted a Resolution on ʻCivil Law Rules on Robotics’. In it, the Parliament explicitly referred to a long history of sociotechnical imaginaries of robots: ʻfrom Mary Shelley’s Frankenstein’s Monster to the classical myth of Pygmalion, through the story of Prague’s Golem to the robot of Karel Čapek, who coined the word, people have fantasised about the possibility of building intelligent machines, more often than not androids with human features’. The Parliament also quoted Asimov’s Laws, emphasising that these laws ought to be regarded as directed at the designers, producers and operators of robots ʻsince those laws cannot be converted into machine code’. Although it remains to be seen what can be encoded in machine code, prioritising laws addressed at robot engineers is probably a wise choice.

The Parliament proposed a set of general considerations regarding various robot-related perspectives. On HRI, the Parliament noted that joint human-robot activity should be founded on predictability and directability of the robot. The ability to direct the robot, in particular, arguably also depends on the degree of robot autonomy, which the Parliament defined as ʻthe ability to take decisions and implement them in the outside world, independently of external control or influence’.

In law, tools under human control are usually not considered legal persons. In contrast, the Parliament noted that as robots gain autonomy, their classification as mere tools becomes questionable, arguing that this raises questions about liability. Overall, the Parliament canvassed a broad set of issues, such as principles to be observed in the development of robotics and AI, ethical principles, a new European Agency for Robotics and AI, the flow of data, standardisation, safety and security, autonomous means of transport, care robots and medical robots, human repair and enhancement. The Parliament Resolution includes a long list of requests, including one asking the Commission to ʻsubmit a proposal for a legislative instrument on legal questions related to the development and use of robotics and AI foreseeable in the next 10 to 15 years’. This relatively short time horizon signals recognition of the high uncertainty about the future of robotics and AI.

In responding to this request, the European Commission initiated various expert groups on issues such as ethical principles for AI and liability. Ultimately, the EP’s request for a comprehensive legislative proposal on all matters related to robotics and AI was answered, but in a more piecemeal fashion: through dedicated legislative acts like the AIA, as well as through updates to existing legislation, as in the case of the MR.

3. The Machinery Regulation

3.1 Introduction

This section contextualises the Machinery Regulation (MR) within the EU’s framework for product safety, designed to harmonise health and safety standards for machinery and support trade across the internal market. The MR enforces compliance with Essential Health and Safety Requirements (EHSR) and mandates CE marking, as well as third-party assessments for higher-risk machinery and machinery incorporating advanced technologies. These assessments are performed by notified bodies to ensure conformity with the MR’s requirements.

3.2 The MR and the EU’s ʻNew Approach’ Framework

The MR operates within the EU’s product safety framework, designed to harmonise health and safety standards across the internal market, thereby eliminating trade barriers. This regulation embodies the ʻNew Approach’ to legislation, diverging from the ʻOld Approach’ that incorporated technical specifications directly into legal texts. Thus, the MR adopts a technology-neutral stance, setting forth Essential Health and Safety Requirements (EHSR) without mandating specific technical solutions. This leaves manufacturers the freedom to select the most appropriate technical solutions, encouraging innovation and the development of new products. A critical aspect of this approach is the use of technical standards, particularly the numerous Harmonised Standards. When manufacturers follow these Harmonised Standards, compliance with the MR’s requirements is presumed, streamlining the demonstration of adherence to essential health and safety criteria while allowing for a broad range of compliant technical solutions.

The MR establishes health and safety requirements for the design and construction of machinery placed on the European market. One of the significant changes in the Machinery Regulation compared to the previous Directive is its explicit intention to cover recent and forthcoming technological advancements in machinery. The MR therefore includes provisions on autonomous mobile machinery, equipment connected through the Internet of Things, and some aspects of artificial intelligence, where specific AI modules using learning techniques ensure safety functions.

This regulation is crucial for manufacturers and other economic operators like importers, as it ensures that machinery products meet the essential health and safety requirements and can be traded without restrictions in the internal market.

3.3 CE Marking and Third-Party Assessments

For end-users of machinery, the most visible sign of compliance with the Machinery Regulation is the CE mark affixed to the product, indicating adherence to the EU’s harmonised framework for product safety. Manufacturers of machinery falling within this regulation’s scope must conduct a conformity assessment process to verify compliance with the essential health and safety requirements. Upon successful completion of this process, the machinery is granted the CE mark, signifying its eligibility for unrestricted trade within the internal market. Depending on the specific type of machinery, a manufacturer may need to comply with multiple regulatory frameworks. For instance, a medical robot could be subject to the MR, the AIA and the Medical Device Regulation (MDR), assuming these laws are applicable. Navigating these various rule-sets poses challenges, prompting efforts towards automating compliance analysis.

A key element of the MR is the role of third-party entities, known as notified bodies, in the conformity assessment process. These bodies are crucial, particularly for machinery posing greater risks or embedding advanced technologies like autonomous mobile machinery or AI systems with learning capabilities for safety functions. Notified bodies conduct independent assessments to verify machinery’s compliance with regulatory standards, playing a pivotal role in ensuring the safety and conformity of high-risk machinery categories, including certain robots. The involvement of notified bodies is significant as it also links to the AIA’s applicability criteria, as detailed subsequently.

4. Terminology

4.1 Introduction

The MR’s use of terminology, notably the broader classification of ʻmachinery’, encompasses robots, even though the term ʻrobots’ is not explicitly used. While the MR also refrains from using the term ʻsmart robot’, it revises the MD to integrate provisions for autonomous machinery and AI technologies, addressing the evolving landscape of technological advancements in robotics.

4.2 Machines versus Robots

If we are to read the MR as a law on robotics, we first face a definitional challenge, as the ʻR-word’ has no function in the Regulation. Instead, it focuses on ʻmachinery’, meaning an ʻassembly, fitted with or intended to be fitted with a drive system other than directly applied human or animal effort, consisting of linked parts or components, at least one of which moves, and which are joined together for a specific application’. Definitions of ʻrobot’ may not necessarily make use of these exact words, but it is arguably difficult to imagine a physical robot without an assembly of various parts, including a drive system. For example, ISO 8373:2021 describes a robot as ʻa programmed actuated mechanism with a degree of autonomy to perform locomotion, manipulation or positioning’. The definitions of ʻrobot’ and ʻmachinery’ have the drive system (or actuation) in common. However, the definition of ʻrobot’ contains additional requirements. Under the ISO definition, a robot must be programmed, while machinery can also be steered by a human without any programming. Moreover, the definition of robot emphasises the robot’s degree of autonomy, ie the performance of tasks without human intervention, which is only an implicit possibility for machinery. Thus, robots defined according to ISO 8373:2021 will regularly fulfil the requirements for machinery, but not vice versa. In summary, robots will be subject to the MR, provided that no exclusions apply.

Theoretically, the concept of smart robots includes a range of autonomous machines such as cars, drones, personal service robots, and medical robots. However, in practice, the exclusions listed in Article 2(2) MR encompass a wide array of machinery, including weapons, specific vehicles and seagoing vessels, among others. For instance, robots used in amusement parks or nuclear facilities, or robots deployed for artistic performances, would not fall under the purview of the MR.

For this article, the definitional challenge nevertheless remains. For clarity and brevity, the text refers to the MR’s requirements as robot-related requirements, even though the Regulation also covers a broader spectrum of machinery and related products.

4.3 Smart Robots: AI vs Self-Learning and Autonomy

When regulating smart robots, the MR faces a second terminological challenge similar to that encountered in the broader discourse on AI regulation, especially concerning the AIA. This challenge is to define AI in legal terms that are technology-neutral, future-proof and practical. Instead of using the term ʻAI’, the MR opts for descriptors such as ʻautonomy’ and ʻself-learning’. These terms are often applied together, particularly in reference to control systems of machinery that exhibit ʻfully or partially self-evolving behaviour or logic designed to operate with varying levels of autonomy.’ This approach seems aimed at capturing the evolving nature of AI without the limitations of a fixed definition, particularly since the MR predates the AIA.

Autonomy in robotics encompasses the capability of robots to operate and make decisions independently, without human intervention in real time. This involves various levels of autonomy, from simple pre-programmed tasks to complex, adaptable behaviours that respond to the environment. A common framework for understanding autonomy, especially in the context of autonomous driving, is the set of SAE levels of automation, which range from Level 0 (no automation) to Level 5 (full automation). While this specific framework is tailored to vehicles, the concept of graded autonomy can apply to robotics broadly, indicating a spectrum from basic automated functions to fully independent operation.
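By way of illustration, the SAE taxonomy can be rendered as a simple enumeration. The sketch below is purely illustrative; the level names follow SAE J3016, while the helper function is an assumption introduced for this example.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 levels of driving automation (illustrative sketch)."""
    NO_AUTOMATION = 0           # human performs the entire driving task
    DRIVER_ASSISTANCE = 1       # a single support feature, eg adaptive cruise control
    PARTIAL_AUTOMATION = 2      # combined steering and speed support, human monitors
    CONDITIONAL_AUTOMATION = 3  # system drives, human must take over on request
    HIGH_AUTOMATION = 4         # no human fallback needed within a defined domain
    FULL_AUTOMATION = 5         # no human involvement under any conditions

def requires_human_fallback(level: SAELevel) -> bool:
    """Up to Level 3, a human must remain available to intervene."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(requires_human_fallback(SAELevel.PARTIAL_AUTOMATION))  # True
```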

In robotic movements, autonomy can manifest in two primary domains: navigation and manipulation. Navigation involves the robot’s ability to move through its environment, which could include avoiding obstacles, planning paths, or exploring new areas. Manipulation entails the robot’s interaction with objects, such as picking up items, using tools, or performing tasks that require precise control of robotic arms or manipulators.

One of the key technologies enabling autonomy, especially in learning and adapting to new situations, is machine learning, and more specifically, reinforcement learning (RL). RL allows a robot to learn from interactions with its environment by trying different actions and learning from the outcomes, optimising its behaviour over time. For example, an RL-powered robot navigating a space could learn to avoid obstacles more efficiently over time based on past encounters or adjust its path based on the dynamic changes in its surroundings. By directly targeting autonomy and self-learning, the MR adopts a relatively concrete set of concepts that should be easy to understand from a technical perspective.
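The mechanics of such learning can be illustrated with a minimal tabular Q-learning sketch for grid navigation. The grid layout, obstacle positions, rewards and learning parameters below are invented for this example and carry no regulatory significance.

```python
import random

# Illustrative 5x5 grid: the robot starts at (0, 0); goal and obstacles are invented.
GRID, GOAL = 5, (4, 4)
OBSTACLES = {(1, 1), (2, 3), (3, 1)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; moves into walls or obstacles are penalised."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID) or nxt in OBSTACLES:
        return state, -5.0   # blocked: stay put, penalty discourages collisions
    if nxt == GOAL:
        return nxt, 10.0     # reaching the goal is rewarded
    return nxt, -0.1         # small step cost encourages efficient paths

q = {}  # Q-table: (state, action_index) -> estimated value
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(300):
    state, steps = (0, 0), 0
    while state != GOAL and steps < 200:
        # epsilon-greedy: mostly exploit learned values, occasionally explore
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
        nxt, reward = step(state, ACTIONS[a])
        best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
        old = q.get((state, a), 0.0)
        # Q-learning update: nudge the estimate towards reward + discounted future value
        q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
        state, steps = nxt, steps + 1
```

After training, the greedy policy steers around the penalised cells, illustrating how behaviour is shaped by past encounters rather than by explicit programming.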

In contrast, the AIA defines an AI system as ʻa machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’. Both the MR and AIA address the varying levels of autonomy, and self-learning in the MR can be seen as a form of adaptiveness. Influencing the physical environment is inherent in robots and machinery. While the MR does not explicitly refer to ʻinference from input data’, this aspect is likely embedded in self-learning systems, and robots use a variety of sensors for input. Consequently, a robot’s control system that combines self-learning and degrees of autonomy would likely also fit the AIA’s AI definition. However, whether such a system also falls under the AIA’s definition of high-risk AI is a separate matter, discussed below in Section 7.2.

The practical implications of the terminological distinctions between the MR and the AIA are not entirely clear, potentially leading to additional compliance costs due to the use of slightly different terms for analogous concepts. However, for robot manufacturers and third-party entities, adhering to the MR’s more concrete requirements might prove more straightforward. Nonetheless, ambiguities could arise in cases of minimal autonomy since the ʻvarying levels of autonomy’ are not explicitly defined in either the AIA or MR.

In conclusion, the MR’s strategy of eschewing the broad term ʻAI’ in favour of ʻautonomy’ and ʻself-learning’ enables coverage of a wide spectrum of technologies, including those that allow robots to learn and adapt to their environment. This nuanced approach ensures the MR remains technology-neutral, flexible and able to accommodate future developments in robotics and AI technologies.

The subsequent sections first address the specific requirements that the MR sets forth for smart robots and then explore the interplay between these requirements and those outlined in the AIA.

5. Smart Robots and the MR

5.1 Introduction

The MR’s requirements for certain advanced machines are applicable to smart robots and regulate some aspects of HRI. Unlike its predecessor, the MR considers the integration of cobots into human environments. This shift from segregation to integration emphasises the need to rigorously assess and mitigate risks in spaces where humans and robots coexist without barriers, or even collaborate or interact. For instance, the force of a cobot’s handshake could unintentionally cause harm, or its unforeseen movement in public spaces might pose a danger to passersby. These scenarios highlight the importance of ensuring robot movements and actions do not compromise human safety, aligning with Asimov’s first law that a robot should not harm humans.

Considering this, the MR specifically mandates risk assessments and the prevention of hazards stemming from physical contact with machinery, particularly those with moving parts. While this stipulation is expected, it is noteworthy that the required assessment is by no means limited to physical health. It also covers the psychological stress caused by robot encounters. The MR’s understanding of health and safety is therefore a broad one. The design and physical presence of a robot—for example, its height—might intimidate vulnerable individuals like patients, children, or the elderly. This general requirement to assess HRI risks applies regardless of whether the robot is tele-operated or programmed.

The complexities of these assessments become particularly pronounced when considering AI-powered robots, whose capabilities for self-evolving behaviour and autonomous mobility introduce new dimensions to safety and risk management. For smart robots, new provisions are in place, focusing on self-evolving behaviour and autonomous mobility. The Essential Health and Safety Requirements (EHSR) mandate the identification and management of specific risks for both categories, combined with additional requirements detailed in subsequent sections.

5.2 Robot Autonomy

Robot autonomy raises significant safety concerns due to the diminished role of direct human control. Defined in EHSR Section 3.1.1, ʻautonomous mobile machinery’ refers to machinery capable of independently ensuring safety functions, diverging from traditional operations that rely on human operators directly involved in running a machine. This definition aligns with the Parliament’s conception of autonomy, emphasising the machine’s independence from human intervention. However, autonomy in the MR’s context is more narrowly defined, concentrating solely on safety functions.

While the MR accommodates autonomous mobile robots, it sets forth certain basic requirements both for their design and supervision. First, robot engineers must ensure that the technical control system running the robot executes the autonomous function safely. In addition, a human supervisory function must exist, serving as a critical yet narrowly focused mechanism for remote monitoring and controlling machinery in autonomous modes. This function allows for essential actions such as stopping, starting, or moving the machinery to a safe state, contingent on the supervisor’s comprehensive understanding of the operational area, whether directly observed or indirectly assessed.
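A minimal sketch of such a supervisory function might take the following form. The class and method names are assumptions introduced for illustration; the MR does not prescribe any particular interface and leaves such details to standards.

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"
    STOPPED = "stopped"
    SAFE_STATE = "safe_state"  # eg parked in a designated safe position

class AutonomousRobot:
    """Minimal stand-in for a robot control system (illustrative only)."""
    def __init__(self):
        self.mode = Mode.STOPPED

class Supervisor:
    """Sketch of the MR's human supervisory function: remote stop, start and
    moving the machine to a safe state. Names are assumptions, not MR text."""
    def __init__(self, robot: AutonomousRobot):
        self.robot = robot

    def stop(self) -> None:
        self.robot.mode = Mode.STOPPED

    def start(self, area_confirmed_clear: bool) -> None:
        # The MR ties intervention to the supervisor's understanding of the
        # operational area, whether directly observed or indirectly assessed.
        if area_confirmed_clear:
            self.robot.mode = Mode.AUTONOMOUS

    def move_to_safe_state(self) -> None:
        self.robot.mode = Mode.SAFE_STATE

robot = AutonomousRobot()
supervisor = Supervisor(robot)
supervisor.start(area_confirmed_clear=True)  # supervisor confirms the area first
supervisor.stop()                            # remote stop remains available at any time
```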

The requirements for robot autonomy and supervision are intentionally broad, necessitating further detail through technical standards. Robot autonomy does not eliminate human involvement, as the MR ensures human participation in both the control system’s design and, to a certain extent, its oversight. However, the MR does not specify the conditions under which human supervision is required, nor does it detail how such supervision should be implemented. It merely opens the door for supervision, leaving the details of how the supervisory function is staffed and managed to be defined by additional rules (including the AIA), standards, or even user instructions. Therefore, the MR does not provide a comprehensive regulatory framework for autonomous robots; it merely serves as an enabler for such regulation.

This reliance on standards and the mandate for manufacturers to conduct risk assessments before placing a robot on the market or putting it into service are signs of the MR’s meta-regulatory approach. Meta-regulation refers to a regulatory strategy where the regulation is focused not on dictating specific behaviours or outcomes but on governing the processes through which organisations self-regulate. This approach leverages the capacity of organisations to monitor their own compliance with regulatory principles, often through internal governance mechanisms, standards or guidelines. Ideally, meta-regulation encourages organisations to develop their own compliance strategies tailored to their operational contexts while adhering to the broader goals or principles set by the regulatory authority. This aligns with the Parliament’s emphasis on the ʻdo not harm’ principle being directed at designers rather than the robots themselves, thereby tasking engineers with evaluating risks and ensuring machinery safety. The MR also accounts for risks that may emerge post-market due to the robot’s evolving and autonomous behaviour, as well as HRI. As mentioned, it does not, however, prescribe specific technical solutions for meeting the requirements.

In the MR, meta-regulation is nevertheless combined with more specific requirements following the logic of ʻcommand and control’. An example is the requirement for autonomous mobile machinery to either operate within a specified zone or possess obstacle detection capabilities. This is a relatively concrete requirement that can be checked and enforced. However, even this requirement retains a meta-regulatory character, as it does not specify the methods for meeting it. The choice between restricting machinery to a designated zone or equipping it with detection technology is explicitly dependent on the risk assessment. For instance, consider the application of zoning and detection capabilities within a hospital setting, which utilises different types of robots for various tasks. Zoning could be applied to material transport robots, restricting their operation to specific areas like the pharmacy or laboratory wings, where interaction with patients is minimal and staff are trained to interact safely with these robots. This approach ensures that the robots fulfil their designated roles without posing risks in crowded or sensitive areas, adhering to the MR’s safety mandates through spatial limitations.

Conversely, human detection becomes crucial for robots operating in more dynamic environments within the same hospital. Surgical robots or diagnostic robots, although more stationary, might still require the capability to detect human presence to ensure safety during their operation. Similarly, robots designed to navigate public areas of the hospital, such as corridors or waiting rooms, must possess advanced detection capabilities to avoid collisions with patients and visitors. These robots, moving beyond tightly controlled zones, exemplify the necessity of equipping autonomous machinery with the ability to identify and react to humans in real-time, a requirement underscored by the MR to safeguard against the risks associated with human-robot interaction in accessible spaces.
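The two compliance strategies, zoning and detection, can be sketched as a simple check. The zone coordinates and function names below are invented for illustration; the MR leaves the concrete method to the manufacturer’s risk assessment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Zone:
    """A rectangular operating zone (illustrative; real zones may be arbitrary shapes)."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def movement_permitted(x: float, y: float, zone: Optional[Zone],
                       detects_obstacles: bool) -> bool:
    """Sketch of the EHSR logic: autonomous mobile machinery must either stay
    within a specified zone or be able to detect persons and obstacles."""
    if zone is not None:
        return zone.contains(x, y)
    return detects_obstacles  # without a fixed zone, detection capability is required

# A transport robot confined to the pharmacy wing (coordinates invented):
pharmacy_wing = Zone(0.0, 20.0, 0.0, 10.0)
print(movement_permitted(5.0, 3.0, pharmacy_wing, detects_obstacles=False))   # True
print(movement_permitted(25.0, 3.0, pharmacy_wing, detects_obstacles=False))  # False
# A corridor robot relying on human detection instead of zoning:
print(movement_permitted(25.0, 3.0, None, detects_obstacles=True))            # True
```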

This approach, despite its direct mandates, maintains a meta-regulatory aspect by granting manufacturers the leeway to determine their compliance strategy with safety obligations. By melding these approaches, the MR offers a regulatory framework that is flexible in part yet stringent.

5.3 AI and Robots with Self-Evolving Behaviour

The MR’s second AI-based category is ʻself-evolving behaviour’ of machines. The notion that systems can have agency is not new, but self-evolving behaviour requires additional measures. While the MR does not explicitly define the mechanisms of this behaviour evolution, it is reasonable to infer that machine learning or other AI technologies could be instrumental in facilitating the self-evolution of behaviour. This is in line with Recital 54 of the Regulation, which justifies the stricter rules for machinery with self-evolving behaviour because ʻdata dependency, opacity, autonomy and connectivity’ could considerably increase the probability and severity of harm.

The use of machine learning in robots indeed poses safety challenges. Theoretically, a robot’s AI system could enable it to perform new tasks or move in unforeseen ways, potentially leading to harmful outcomes. This ties into the European Parliament’s expectation for HRI to be characterised by directability and predictability. In the context of Asimov’s Laws, particularly the first law (not to harm humans) and the second law (to obey orders from humans) are relevant to the challenges with self-evolving robot behaviour.

A practical concern is that self-learned behaviours might surpass the boundaries of initial risk assessments, pivotal to the safety certification. As a result, stringent design parameters are mandated by the MR to ensure that the robot does not exceed its predefined ʻtask and movement space’. However, the practical implementation of this requirement raises several questions. Often, the parameters for a robot’s operation might be set, at least partly, by the user, rather than the manufacturer. A robot’s user typically determines the specific tasks the robot should perform and the areas where it can operate. Additionally, a robot’s tasks and movement space are not static; they frequently change based on the user’s commands and intentions. Updates to the robot’s software or alterations to its mechanical configuration, such as adding a new arm or hand, may require the expansion of its operational capabilities. Consequently, the parameters defining allowable tasks and movement spaces need to be adaptable to accommodate such changes. The challenge of addressing these issues falls to future standardisation efforts. While the current provision limiting the robot to its pre-defined task and movement space is somewhat broad, it lays the groundwork for developing detailed mechanisms and protocols to ensure safety compliance.
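One conceivable way to operationalise a predefined ʻtask and movement space’ is a runtime validation step that rejects actions outside it. The following sketch rests on assumed names and bounds; it is not a method prescribed by the MR or by any standard.

```python
from typing import Tuple

# Whitelist and bounds would be set in the risk assessment (values invented here):
ALLOWED_TASKS = {"deliver_medication", "transport_samples"}
MOVEMENT_SPACE = ((0.0, 50.0), (0.0, 30.0))  # ((x_min, x_max), (y_min, y_max))

def validate_action(task: str, target: Tuple[float, float]) -> bool:
    """Reject any self-learned action that exceeds the predefined task and movement space."""
    (x_min, x_max), (y_min, y_max) = MOVEMENT_SPACE
    within_space = x_min <= target[0] <= x_max and y_min <= target[1] <= y_max
    return task in ALLOWED_TASKS and within_space

# A newly learned behaviour is only executed if it passes validation:
assert validate_action("deliver_medication", (10.0, 5.0))      # within the envelope
assert not validate_action("open_fire_door", (10.0, 5.0))      # task not authorised
assert not validate_action("transport_samples", (60.0, 5.0))   # outside movement space
```

On this design, user-driven changes to tasks or operating areas would be accommodated by updating the whitelist and bounds, subject to a renewed risk assessment.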

While ensuring safety makes it crucial to limit what a robot can do, such measures alone may not be sufficient to mitigate all risks. Robots could still unexpectedly execute unintended and potentially harmful actions. Therefore, the MR also requires mechanisms to interrupt or correct these actions and to retain data about the decision-making. This provision is similar to Article 14 AIA, as further elaborated in Section 7.

6. Human-Robot Communication

6.1 Robots Responding to People

In robotics, the role of communication extends beyond the mere exchange of information to encompass vital safety functions. The MR stipulates that smart robots should be equipped to ʻrespond to people adequately and appropriately’, echoing the Parliament’s insistence on predictability within HRI. Communication can take various forms, such as ʻthrough words and non-verbally through gestures, facial expressions or body movement.’ This sets a foundation for more natural and intuitive interactions, but it also implicitly anthropomorphises robots.

This requirement for robot communication is anchored in the MR’s principle of ergonomics. Ergonomics aims to optimise human well-being and overall system performance by designing environments, products, and systems that align with human capabilities and limitations, thereby enhancing safety, efficiency, and even comfort. This broader focus on improving robot functionality for human interaction thus extends beyond mere safety concerns.

6.2 Communication with the Operator

In addition to responding to ʻpeople’, the smart robot should also ʻcommunicate its planned actions (such as what it is going to do and why) to operators in a comprehensible manner’. The distinction between communicating ʻresponses’ to people generally and conveying planned actions specifically to operators is noteworthy, because the nature and depth of communication varies based on the audience’s role in relation to the robot. While general responses might be necessary for broad situational awareness and safety for all nearby humans, detailed communication of planned actions could be essential for operators responsible for overseeing robot operations. It is also likely that operators can avail themselves of communication channels that are not open for third parties, such as a direct interface with a robot’s control system. Yet, depending on the context it might also be necessary to develop various communicative roles that go beyond ʻpeople’ and ʻoperator’, and which could include patient, family member or other social roles.
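Such audience-dependent communication could be modelled as a simple message structure, as sketched below. The roles and fields are assumptions introduced for illustration; the MR defines neither.

```python
from dataclasses import dataclass
from enum import Enum

class Audience(Enum):
    PERSON = "person"      # anyone nearby: broad situational awareness
    OPERATOR = "operator"  # responsible for oversight: full intent and rationale

@dataclass
class IntentMessage:
    action: str     # what the robot is going to do
    rationale: str  # why (conveyed in full only to operators)

def render(msg: IntentMessage, audience: Audience) -> str:
    """Tailor the depth of communication to the audience's role."""
    if audience is Audience.OPERATOR:
        return f"Planned action: {msg.action}. Reason: {msg.rationale}."
    return f"Attention: robot will now {msg.action}."  # eg spoken aloud or shown on a display

msg = IntentMessage("move to the laboratory wing", "scheduled sample pick-up at 14:00")
print(render(msg, Audience.PERSON))    # brief announcement for bystanders
print(render(msg, Audience.OPERATOR))  # detailed message via the operator interface
```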

Moreover, the MR’s focus on operators in the context of autonomous machinery presents an interesting paradox. The concept of ʻoperators’ traditionally implies direct human control or supervision, yet autonomous machinery, by definition, operates with a degree of independence. This raises questions about how ʻoperation’ is conceptualised in scenarios where machines are designed to make decisions without human input. The regulation seems to anticipate a hybrid model, where autonomous machines perform tasks independently, but where human oversight, particularly by designated operators, remains a critical safety and operational consideration.

Such considerations underscore the complexity of integrating autonomous robots into human-centric environments. They highlight the need for regulatory frameworks to evolve in tandem with technological advancements, ensuring clear, context-sensitive guidelines for human-robot communication and interaction. The nature of these questions is arguably too detailed and, at least presently, too technical for legislation, so it makes sense to leave these questions to standardisation and allow for innovative solutions, provided that these solutions adequately manage risk.

7. Integration with the AIA

7.1 Introduction

The interaction between the AIA and the MR is significant, as both regulations deal with aspects of AI, whether integrated into specific machinery or in a broader context. Given that both sets of rules can be applicable simultaneously, it is crucial to examine the criteria that trigger their application and determine if similar issues are handled in comparable manners.

7.2 AIA Applicability Directly or via the MR

The AIA’s applicability hinges upon various criteria that cannot be exhaustively addressed here. However, in brief, AI systems that pose a risk beyond a certain threshold can either be forbidden under Article 5 AIA or regulated as high-risk pursuant to Article 6 AIA. In the context of robotics, it is essential to differentiate between two types of AI systems within a robot.

First, there could be various AI-based software running within a robot’s system which can be directly regulated by the AIA independently of the MR. In principle, a robot can run any software, such as for migration control purposes or workplace software utilising emotion recognition. Depending on the specific use case, the former example may be regulated as high-risk under the AIA, Annex III (7), while the latter type of emotion recognition is prohibited under Article 5(1)(f) AIA. However, in these cases, the focus is on the software in general, regardless of its installation on a robot or any other hardware (eg a laptop). This means that the conformity assessment of the software (ie the AI system or systems) is conducted separately from the conformity assessment of the hardware (robot or laptop).

This direct application of the AIA needs to be distinguished from the indirect applicability via the MR. Specific machinery components regulated under the MR can trigger the applicability of the AIA due to their criticality. Focusing on the robot as a machine regulated under the MR, certain AI-based systems are categorised as especially critical for health and safety under the MR, triggering the AIA. This is explained subsequently, but an example would be a robot’s emergency stop device using machine learning.

The AIA foresees that an AI system intended as a safety component of machinery (and similar products regulated under the MR) can fall under the high-risk AI category. This trigger is conditional on the machinery, or its component, requiring a third-party conformity assessment under the MR. If Article 6(1) AIA is triggered, all requirements for high-risk AI systems in Chapter III, Section 2 AIA apply. Therefore, it becomes essential to identify MR requirements for third-party conformity assessment.

The MR stipulates a third-party conformity assessment for the safety components or embedded systems of certain smart robots, namely those exhibiting ʻfully or partially self-evolving behaviour using machine learning approaches ensuring safety functions’. As mentioned, this requirement for a third-party assessment triggers obligations for high-risk AI based on Article 6(1) AIA. Consequently, for robots whose safety-related software is considered high-risk AI, the requirements of both the AIA and MR apply to the respective safety component or embedded system.
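The trigger logic described here can be paraphrased as a short decision function. This is a deliberate simplification for illustration and no substitute for legal analysis of Article 6(1) AIA.

```python
def is_high_risk_under_art6_1(is_safety_component: bool,
                              mr_requires_third_party_assessment: bool) -> bool:
    """Simplified paraphrase of Article 6(1) AIA for MR-covered machinery: an AI
    system intended as a safety component is high-risk if the machinery (or the
    component itself) must undergo third-party conformity assessment under the MR."""
    return is_safety_component and mr_requires_third_party_assessment

# A machine-learning emergency stop ensuring a safety function:
print(is_high_risk_under_art6_1(True, True))    # True: AIA high-risk rules apply
# A convenience feature without a safety function:
print(is_high_risk_under_art6_1(False, False))  # False: no trigger via the MR
```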

Therefore, the AIA could apply differently to these—for example, AI-based emergency stop devices—compared to other high-risk AI software running on the robot, such as an AI-based migration software, regulated under the AIA in its own right. The conformity assessment for the AI-based emergency stop system follows the MR, pursuant to Article 43(3) AIA, with additional requirements for high-risk AI systems applying in addition to the MR’s EHSR.

7.3 Overlapping Human Oversight in MR and AIA

This potential dual applicability leads to the duplication of, at least, requirements regarding human oversight and data recording. Article 14 AIA stipulates that high-risk AI systems must be designed to allow for effective oversight by natural persons throughout their operation. This principle mirrors the MR’s requirement for the supervisability of self-evolving machinery. Although the two rule sets differ in detail, both aim to ensure some level of human oversight. Their effect is that, to manage the various autonomous capabilities of a robot, mechanisms for human intervention are in place to correct or halt the robot’s operations should it stray from safe behaviours.

7.4 Overlapping Recording Requirements

Another aspect of partial alignment between the MR and the AIA involves the recording and retention of decision-making data. The MR requires ʻenabled’ recording of safety-related data, with a mandatory retention period of one year, to ensure machinery compliance with safety standards. In contrast, the AIA mandates that high-risk AI systems automatically record ʻlogs’ of events throughout the system’s lifespan, for very similar purposes. This highlights a discrepancy in focus: the AIA on ʻevents’ versus the MR on ʻdata on the safety-related decision-making process’. Additionally, under Article 19 AIA, logs must be kept for at least six months or longer, depending on the intended purpose of the high-risk AI system, unless other laws specify otherwise. Thus, while both frameworks require recording—albeit of distinct data types—the retention periods differ. This discrepancy is likely to present practical challenges for companies in ensuring compliance with both regulatory frameworks. In addition, if the retained data are personal, the General Data Protection Regulation (GDPR) limits retention to the period necessary for the purposes for which the personal data are processed, which is a more open-ended assessment. This limitation is recognised in the AIA’s data retention requirements, but not in the MR’s.
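In practice, a company subject to both frameworks would need to retain data for the longest applicable minimum period, as the following sketch indicates. The helper is invented for illustration, and the GDPR’s storage limitation for personal data requires a case-by-case legal assessment that cannot be reduced to code.

```python
MR_RETENTION_DAYS = 365       # MR: safety-related decision data kept for one year
AIA_MIN_RETENTION_DAYS = 183  # AIA Art 19: logs kept for at least six months

def minimum_retention_days(subject_to_mr: bool, subject_to_aia: bool) -> int:
    """Retain for the longest minimum period that any applicable framework requires.
    Where the data are personal, the GDPR's storage limitation must additionally
    be assessed case by case; that legal judgement is not reducible to code."""
    periods = [0]
    if subject_to_mr:
        periods.append(MR_RETENTION_DAYS)
    if subject_to_aia:
        periods.append(AIA_MIN_RETENTION_DAYS)
    return max(periods)

print(minimum_retention_days(subject_to_mr=True, subject_to_aia=True))  # 365
```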

The primary goal of the data-recording mandate is to ensure compliance with established safety standards. However, these data are also potentially relevant for stakeholders determining accountability in the event of robot-related accidents causing harm or property damage. The necessity for thorough investigations in such scenarios is paramount, making these data crucial, for example in the context of liability for damages caused by AI. Access to these recorded data could, however, pose challenges. The MR specifies that data collection serves the sole purpose of proving machinery compliance upon a justified request from a competent authority. Similarly, the AIA mandates data recording to support specific supervisory measures. However, once collected, other legislation might stipulate the disclosure of such data for additional purposes, potentially complicating their intended exclusive use.

8. Conclusion

In the realm of smart robot interaction, the MR functions as a ʻbasic’ law in several respects: it addresses the crucial necessity of human safety, its provisions are fundamental but not all-encompassing, and it sets a foundation for additional regulations via standards and future laws concerning HRI. The MR marks a foundational step in the EU’s approach to the safety regulation of smart robotics, emphasising the need for such technologies to operate safely within human environments. In addition to safety, it also incorporates other interests such as health and at least the basic functionality of human communication, as required by the ergonomic principle.

While the MR does not explicitly address fundamental rights concerns, it is complemented by the AIA, which does consider risks to these rights. This discrepancy in protective ambit is arguably less problematic than it might first appear. The MR ensures only the basic safety of the smart robot, while the AIA applies to other, more advanced functions, where fundamental rights might play a role—for instance, if a robot is programmed to carry out policing functions. Thus, to the degree smart robots can be seen as a fundamental rights risk, this can be addressed via the AIA, which is also adaptable to new use cases according to Article 7 AIA.

Despite its contributions to safety, the MR is far from all-encompassing. Smart robot safety is facilitated, but not ensured, because the MR focuses on the placing of products on the market, whereas their use will raise further questions. For example, both the MR and the AIA require oversight, yet the MR does not regulate when and how it should be carried out. This must be determined in user manuals and in additional rules applicable to the use of smart robotics in specific contexts. The use of robots also raises further issues, beyond safety. As these technologies navigate public spaces and collect data through sensors, establishing robust data protection measures and ensuring public safety becomes paramount. Some questions can be answered by other existing laws, and the MR should be situated within a broader legal and ethical framework. This broader framework encompasses not only the specificities of robot safety but also the implications for privacy, liability, and the ethical dimensions of HRI. For example, the GDPR regulates the processing of personal data, but ensuring privacy-friendly processing of a robot’s sensory data can be challenging. In healthcare, prioritising patient autonomy and participation in decisions related to robot-assisted treatments is essential. Balancing these considerations—human supervision, privacy, public interaction, and patient involvement—presents a complex challenge in practice.

It is thus tempting to call for the creation of a comprehensive legal framework applicable to the use of smart robotics, and some have done so. Indeed, new regulatory approaches are surely needed for the safety of smart robot interactions in the medium to long term. The European Parliament’s original vision for robotics and AI envisaged technologies that enhance human well-being, a goal that continues to be relevant for the future of smart robotics regulation. Yet, considering the Collingridge dilemma, a future regulatory framework should be introduced step by step. In light of the recent adoption of both the MR and the AIA, this paper refrains from advocating for a single regulatory framework for smart robot use, at least at this juncture. It is essential first to assess how these frameworks function in practice. Although the MR may not fully suffice for the regulation of smart robotics over time, it offers numerous prospects for technical standardisation and innovation. It is possible that the combination of the MR and the AIA includes sufficient incentives and meta-regulation for technical developments and standardisation to ensure that the initial phase of the future of smart robotics is safe and respects fundamental rights. If not, they can be complemented as needed. Questions regarding the use of smart robotics (as opposed to its introduction on the European market) are arguably often too context-specific to regulate at a general level, and if there is a need for further legislation, it can be introduced after an initial period of learning and creativity, both at national level and in specific contexts.

In conclusion, the MR’s establishment of safety requirements for smart robots represents just the initial step in a complex regulatory journey. Together with the AIA and other parts of the legal framework, this is hopefully a sufficient basis for an initial phase of development. Future legislation must build upon this foundation, ensuring that as robotics and AI technologies evolve, they do so in a manner that upholds the dignity, autonomy and self-determination of individuals, particularly in sensitive areas such as healthcare.

Funding and Acknowledgments

Parts of this research are supported by the ʻVulnerability in the Robot Society’ (VIROS) project, funded by the Norwegian Research Council (project number 247947). Thanks are due to Mona Naomi Lintvedt, Lee Bygrave and Rose Margrethe Østmo Monrad for valuable comments to a previous version of this work. This paper was also presented at the workshop ʻAI Robotics in Healthcare: A Challenge for Law and Tech?’ at NOVA Law School in Lisbon, Portugal, on 17 April 2023 and the author wishes to thank the participants for valuable comments. Finally, I thank the reviewer for insightful and very helpful comments. Any remaining errors are the sole responsibility of the author.

  • 1
Isaac Asimov, ʻRunaround’ in I, Robot, vol S1282 (New American Library 1956). The short story also addresses robot safety and is discussed in the safety literature: see eg Sami Haddadin, Towards Safe Robots: Approaching Asimov’s 1st Law (Springer Nature 2013).
  • 2
    Discussions of Asimov’s Laws have been influential in the literature on robot ethics, and some have suggested alternatives: see RR Murphy and DD Woods, ʻBeyond Asimov: The Three Laws of Responsible Robotics’ (2009) 24(4) IEEE intelligent systems 14. For a general ethical critique, see Susan Leigh Anderson, ʻAsimov’s “Three Laws of Robotics” and Machine Metaethics’ (2008) 22 AI & Society 477 <https://doi.org/10.1007/s00146-007-0094-5>.
  • 3
    Regulation (EU) 2023/1230 of the European Parliament and of the Council of 14 June 2023 on machinery and repealing Directive 2006/42/EC of the European Parliament and of the Council and Council Directive 73/361/EEC [2023] OJ L 165/1.
  • 4
    Ronald Leenes and others, ʻRegulatory Challenges of Robotics: Some Guidelines for Addressing Legal and Ethical Issues’ (2017) 9(1) Law, Innovation and Technology 1 <https://doi.org/10.1080/17579961.2017.1304921>; Yueh-Hsuan Weng, Chien-Hsun Chen and Chuen-Tsai Sun, ʻToward the Human–Robot Co-Existence Society: On Safety Intelligence for Next Generation Robots’ (2009) 1(4) International Journal of Social Robotics 267 <https://doi.org/10.1007/s12369-009-0019-1>; Alberto Martinetti and others, ʻRedefining Safety in Light of Human-Robot Interaction: A Critical Review of Current Standards and Regulations’ (2021) 3 Frontiers in Chemical Engineering <https://doi.org/10.3389/fceng.2021.666237>; Michael Guihot, Anne F Matthew and Nicolas P Suzor, ʻNudging Robots: Innovative Solutions to Regulate Artificial Intelligence’ (2017) 20(2) Vanderbilt Journal of Entertainment & Technology Law 385.
  • 5
    European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).
  • 6
    ibid General Principle 1.
  • 7
    Riccardo Vecellio Segate and Angela Daly, ʻEncoding the Enforcement of Safety Standards into Smart Robots to Harness Their Computing Sophistication and Collaborative Potential: A Legal Risk Assessment for European Union Policymakers’ (2023) European Journal of Risk Regulation 1, 8 <https://doi.org/10.1017/err.2023.72>.
  • 8
    Article 54 MR.
  • 9
    European Parliament Resolution (n 5) 31-35.
  • 10
    Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJ L 1.
  • 11
    Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC [2006] OJ L 157/24.
  • 12
    The Directive is often discussed briefly in legal literature on robotics: see eg Martin Ebers, ʻRegulating AI and Robotics: Ethical and Legal Challenges’ in Martin Ebers and Susana Navas (eds), Algorithms and Law (Cambridge University Press 2020) 54-55; Vecellio Segate and Daly (n 7) 25-26. However, a comprehensive report on legal aspects of robotics completely disregards the Directive: see Monika Simmler and Olivia Zingg, ʻRechtliche Aspekte Sozialer Roboter’ (2021) <www.alexandria.unisg.ch/server/api/core/bitstreams/b3e5bba5-425a-4d9a-941f-3c933ab80d99/content>. Nonetheless, the official EU guide does address some of the practical questions regarding the Machinery Directive: see ʻGuide to Application of the Machinery Directive 2006/42/EC | Safety and Health at Work EU-OSHA’ (European Union 2019) <https://osha.europa.eu/en/legislation/guidelines/guide-application-machinery-directive-200642ec>.
  • 13
    Some of the introductory literature to the MD also addresses the legal context: see eg Torben Jespen, Risk Assessments and Safe Machinery (Springer 2016).
  • 14
    The latest overview lists 1315 harmonised standards under the MD (as per 1 February 2024): see European Commission, ʻMachinery (MD)’ <https://single-market-economy.ec.europa.eu/single-market/european-standards/harmonised-standards/machinery-md_en>.
  • 15
    Technical standards can be seen as a type of soft law, according to Hans-W Micklitz, ʻSoft Law, Technical Standards and European Private Law’ in Mariolina Eliantonio, Emilia Korkea-aho and Ulrika Mörth (eds), Research Handbook on Soft Law (Edward Elgar Publishing 2023) 145–161.
  • 16
    The EU Court of Justice has recently decided that Harmonised Standards ʻform part of EU law owing to their legal effects’: see Case C-588/21 P, PublicResourceOrg, Inc and Right to Know CLG v European Commission, judgment of 5 March 2024 (Grand Chamber) (ECLI:EU:C:2024:201) para 89. The decision focuses on the disclosure of Harmonised Standards by the European Commission. Such disclosure may be justified due to ʻthe existence of an overriding public interest, within the meaning of the last clause of Article 4(2) of Regulation No 1049/2001, arising from the principles of the rule of law, transparency, openness and good governance.’
  • 17
    See Vecellio Segate and Daly (n 7) 26 (ʻEither way, the Commission declared that it might accommodate amendments related to the Internet of Things (IoT) and smart robotics—though the contents of such amendments as well as the Commission’s policy approach in integrating them remain undefined—and they definitely do not feature in the New Machinery Regulation’). It is unclear what this assertion is based on, as the otherwise excellent article primarily addresses the MD, rather than the MR. This focus is understandable, given that it was submitted for review before the MR was adopted.
  • 18
    See eg Nicole Berx, Wilm Decré and Liliane Pintelon, ʻExamining the Role of Safety in the Low Adoption Rate of Collaborative Robots’ (2022) 106 Procedia CIRP 51.
  • 19
    While the original Proposal for a Regulation on Machinery Products (COM/2021/202 final) mentioned cobots in recital 11, the final version omits this term in the corresponding recital 12.
  • 20
    The exclusion of these is explained in MR Recitals 17 and 18. For a full list of exclusions, see Art 2(2) MR.
  • 21
    Murphy and Woods (n 2).
  • 22
    The third law, which states a robot must protect its existence unless it conflicts with the first two laws, extends beyond human-robot interaction to prioritise robot preservation. This focus is arguably less pertinent in contemporary HRI discussions, except perhaps in discussions about autonomous weapons.
  • 23
    European Parliament Resolution (n 5).
  • 24
    Sheila Jasanoff, ʻFuture Imperfect: Science, Technology, and the Imaginations of Modernity’ in Sheila Jasanoff and Sang-Hyun Kim (eds), Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power (University of Chicago Press 2015) ch 1.
  • 25
    European Parliament Resolution (n 5) A.
  • 26
    ibid T.
  • 27
    ibid 50.
  • 28
    ibid AA.
  • 29
    ibid AB.
  • 30
    ibid 51. The idea of regulating various types of robotics in a single law is critiqued by Melinda Florina Lohmann, ʻEin europäisches Roboterrecht – überfällig oder überflüssig?’ [ʻA European Robot Law – Overdue or Superfluous?’] (2017) 50(6) Zeitschrift für Rechtspolitik 168, 171.
  • 31
    European Commission, Directorate-General for Justice and Consumers, Expert Group on Liability and New Technologies – New Technologies Formation, ʻLiability for Artificial Intelligence and Other Emerging Digital Technologies’ (European Union 2019) <https://data.europa.eu/doi/10.2838/573689>; Nathalie A Smuha, ʻThe EU Approach to Ethics Guidelines for Trustworthy Artificial Intelligence’ (2019) 20 Computer Law Review International 97 <https://doi.org/10.9785/cri-2019-200402>.
  • 32
    MR Recital 45; Council Resolution of 7 May 1985 on a new approach to technical harmonization and standards [1985] OJ C 136/1.
  • 33
    MR Recital 26.
  • 34
    MR Recital 44.
  • 35
    MR Recital 61.
  • 36
    MR Recital 12.
  • 37
    Mathias Karlsen Hauglid and Tobias Mahler, ʻDoctor Chatbot: The EU’s Regulatory Prescription for Generative Medical AI’ (2023) 10(1) Oslo Law Review 1 <https://doi.org/10.18261/olr.10.1.1>.
  • 38
    See eg Sofia Almpani and others, ʻExosCE: A Legal-Based Computational System for Compliance with Exoskeletons’ CE Marking’ (2020) 11(1) Paladyn, Journal of Behavioral Robotics 414. The authors attempted to formalise the rules for the conformity assessment procedures for exoskeletons based on the MD and the Medical Devices Regulation.
  • 39
    See eg MR Annex VII.
  • 40
    Article 3(1)(a) MR.
  • 41
    This observation highlights that the MR delineates its own scope without needing a distinct ʻrobot’ definition, in line with the scepticism expressed by Ebers (n 12) about the need for such a definition.
  • 42
    See Hannah Ruschemeier, ʻAI as a Challenge for Legal Regulation – the Scope of Application of the Artificial Intelligence Act Proposal’ (2023) 23 ERA Forum 361.
  • 43
    MR Annex III Part B EHSR 1.2.1 para 2.
  • 44
    This can be based on the concept of machine autonomy, understood as ʻthe ability of a computer to follow a complex algorithm in response to environmental inputs, independently of real-time human input’: see Paul Formosa, ʻRobot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy’ (2021) 31(4) Minds and Machines 595, 599 <https://doi.org/10.1007/s11023-021-09579-2>; Amitai Etzioni and Oren Etzioni, ʻAI Assisted Ethics’ (2016) 18(2) Ethics and Information Technology 149 <https://doi.org/10.1007/s10676-016-9400-6>.
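    To make the quoted definition concrete: purely as an illustration (not drawn from the MR or any cited source, and with all function names hypothetical), a machine is autonomous in this sense when its control loop maps environmental inputs to actions without any real-time human input, as in the following minimal Python sketch.

```python
# Minimal sketch of machine autonomy in the sense quoted above:
# the machine follows an algorithm that reacts to environmental
# inputs with no human command between sensing and acting.
# All names here are hypothetical placeholders.

def read_sensors() -> dict:
    """Stub: gather environmental inputs (eg distance to the nearest person)."""
    return {"distance_to_person_m": 1.2}

def choose_action(inputs: dict) -> str:
    """The 'complex algorithm': a deterministic policy over the inputs."""
    if inputs["distance_to_person_m"] < 0.5:
        return "stop"  # safety-relevant reaction to a nearby person
    return "continue_route"

def actuate(action: str) -> None:
    """Stub: pass the chosen action to the robot's actuators."""
    print(f"executing: {action}")

# No operator intervenes anywhere in this loop.
for _ in range(3):
    actuate(choose_action(read_sensors()))
```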
  • 45
    For a critique, see Erik Stayton and Jack Stilgoe, ʻIt’s Time to Rethink Levels of Automation for Self-Driving Vehicles’ (2020) 39(3) IEEE Technology and Society Magazine 13 <https://doi.org/10.1109/MTS.2020.3012315>.
  • 46
    Angeliki Zacharaki and others, ʻSafety Bounds in Human Robot Interaction: A Survey’ (2020) 127 Safety Science 104667, 1, 3 <https://doi.org/10.1016/j.ssci.2020.104667>.
  • 47
    For a technical example, see Adel Baselizadeh, Diana Saplacan and Jim Torresen, ʻAdaptive Real-Time Learning-Based Neuro-Fuzzy Control of Robot Manipulators’, 2021 9th International Conference on Control, Mechatronics and Automation (ICCMA) (IEEE 2021).
  • 48
    Art 3(1) AIA.
  • 49
    This combination is foreseen in MR Annex III Part B EHSR 1.2.1.
  • 50
    Zacharaki and others (n 46) 1, 9.
  • 51
    MR Annex III Part B EHSR 1.3.7, para 4.
  • 52
    MR Annex III Part B EHSR 1.1.6 (g) (ʻUnder the intended conditions of use, the discomfort, fatigue and physical and psychological stress faced by the operator shall be eliminated or reduced to the minimum possible, taking into account at least, the following ergonomic principles: […].’).
  • 53
    MR Annex III Part B General Principle 1 para 2 (e).
  • 54
    MR Annex III Part B EHSR 3.3, last paragraph.
  • 55
    MR Annex III Part B EHSR 3.2.4.
  • 56
    MR Annex III Part B General Principle 1 para 1 (ʻThe manufacturer of machinery or a related product shall ensure that a risk assessment is carried out in order to determine the essential health and safety requirements which apply to the machinery or related product. The machinery or related product shall then be designed and constructed to eliminate hazards or, if that is not possible, to minimise all relevant risks, taking into account the results of the risk assessment’).
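    In engineering practice, such a risk assessment is commonly operationalised as a severity-by-likelihood matrix (in the style of ISO 12100); the following Python sketch is a generic, hypothetical illustration of that logic, with invented hazards and thresholds, and is not a rendering of the MR’s own requirements.

```python
# Generic risk-matrix sketch (hypothetical, ISO 12100-style):
# score each hazard by severity and likelihood, then decide whether
# it must be designed out or may be minimised by other measures.
# Hazards, scales and thresholds are illustrative only.

hazards = {
    "crushing between robot arm and wall": (4, 3),   # (severity 1-4, likelihood 1-4)
    "collision while navigating a corridor": (2, 3),
    "startling a patient with sudden motion": (1, 2),
}

for hazard, (severity, likelihood) in hazards.items():
    risk = severity * likelihood
    if risk >= 8:
        measure = "eliminate by design (eg physical separation)"
    elif risk >= 4:
        measure = "minimise (eg speed limits, sensing, warnings)"
    else:
        measure = "inform (residual-risk instructions for users)"
    print(f"{hazard}: risk {risk} -> {measure}")
```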
  • 57
    Sharon Gilad, ʻIt Runs in the Family: Meta-Regulation and Its Siblings’ (2010) 4(4) Regulation & Governance 485 <https://doi.org/10.1111/j.1748-5991.2010.01090.x>.
  • 58
    MR Annex III Part B EHSR 3.3.3.
  • 59
    Pär J Ågerfalk, ʻArtificial Intelligence as Digital Agency’ (2020) 29(1) European Journal of Information Systems 1, 3 <https://doi.org/10.1080/0960085X.2020.1721947>; Ole Hanseth and Eric Monteiro, ʻInscribing Behaviour in Information Infrastructure Standards’ (1997) 7(4) Accounting, Management and Information Technologies 183.
  • 60
    See the mention of machine learning in MR Annex I Part A Section 6. For a technical perspective, see Francesco Semeraro, Alexander Griffiths and Angelo Cangelosi, ʻHuman–Robot Collaboration and Machine Learning: A Systematic Review of Recent Research’ (2023) 79 Robotics and Computer-Integrated Manufacturing 102432 <https://doi.org/10.1016/j.rcim.2022.102432>.
  • 61
    European Parliament Resolution (n 5) 50.
  • 62
    MR Annex III Part B EHSR 1.2.1, para 2.
  • 63
    MR Annex III Part B EHSR 1.2.1 para 3.
  • 64
    MR Annex III Part B EHSR 1.1.6 (g) (ʻwhere relevant, adapting machinery or a related product with intended fully or partially self-evolving behaviour or logic that is designed to operate with varying levels of autonomy to respond to people adequately and appropriately (such as verbally through words and non-verbally through gestures, facial expressions or body movement) and to communicate its planned actions (such as what it is going to do and why) to operators in a comprehensible manner’) (emphasis added).
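    How such a communication duty might surface in control software can be sketched as follows; this is a hypothetical illustration only (the step names and announcement format are invented, not taken from the MR or any cited implementation): before executing each planned action, the controller announces, in plain language, what it will do and why.

```python
# Hypothetical sketch of a robot communicating its planned actions
# and their reasons in a comprehensible manner before acting
# (cf the requirement quoted above). All names are illustrative.

def announce(plan: str, reason: str) -> None:
    """Textual/verbal channel: tell nearby people what comes next and why."""
    print(f"I am about to {plan} because {reason}.")

def execute(plan: str) -> None:
    print(f"[executing] {plan}")

planned_steps = [
    ("turn left into corridor B", "my delivery target is in room B12"),
    ("slow down", "a person is crossing my path"),
]

for plan, reason in planned_steps:
    announce(plan, reason)  # communicate the planned action first
    execute(plan)
```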
  • 65
    ibid.
  • 66
    ʻAnthropomorphism is a phenomenon that describes the human tendency to see human-like shapes in the environment’: see Jakub Złotowski and others, ʻAnthropomorphism: Opportunities and Challenges in Human–Robot Interaction’ (2015) 7(3) International Journal of Social Robotics 347, 347 <https://doi.org/10.1007/s12369-014-0267-6>.
  • 67
    MR Annex III Part B EHSR 1.1.6.
  • 68
    On human oversight, see further Section 7.3 below.
  • 69
    In addition, the AIA also regulates general-purpose AI systems and introduces transparency obligations for certain other AI systems.
  • 70
    This follows from a combination of Article 6(1) AIA and Annex I Section A, point 1, AIA, which refers to the old Machinery Directive and must now logically be read as referring to the MR.
  • 71
    Article 25(2) and (3) MR, referring to Annex I Part A (5 and 6).
  • 72
    Chapter III AIA.
  • 73
    MR Annex III Part B EHSR 1.2.1 para 2 (c).
  • 74
    MR Annex III Part B EHSR 1.2.1 para 2 (b).
  • 75
    Article 12 AIA.
  • 76
    Article 5(1)(e) of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC [2016] OJ L 119/1.
  • 77
    Naomi Lintvedt, ʻThermal Imaging in Robotics as a Privacy-Enhancing or Privacy-Invasive Measure? Misconceptions of Privacy When Using Thermal Cameras in Robots’ (2023) 2 Digital Society 33 <https://doi.org/10.1007/s44206-023-00060-4>.
  • 78
    Such a new framework was proposed in 2023 by Vecellio Segate and Daly (n 7) 31: ʻThe time has come for a fully-fledged Regulation conceived for the challenges of smart cobotics’.
  • 79
    European Parliament Resolution (n 5) Recital O.
  • 80
    The ʻCollingridge dilemma’ is a methodological quandary in which efforts to control technological development face a double bind: a technology’s impacts are hard to predict before it is widely used, but by then it has become hard to control. See David Collingridge, The Social Control of Technology (St Martin’s Press 1980).
Copyright © 2024 Author(s)

This is an open access article distributed under the terms of the Creative Commons CC-BY 4.0 License (https://creativecommons.org/licenses/by/4.0/).