1. Introduction

In general terms, the mantra ‘security by design’ (SbD) exhorts software engineers and others involved in building information systems (IS) architecture to think about the security needs for that architecture before it is built and embed those needs in the architecture’s subsequent design and construction. This mantra is often coupled with another, ‘security by default’, which emphasises that the settings of IS architecture should be initially configured to optimise security. Logically, the latter mantra and its implementation are, or ought to be, components of SbD. Accordingly, this article proceeds on the basis that SbD implicitly, if not explicitly, encompasses ‘security by default’.

As elaborated herein, SbD initially emerged from computer engineering principles. It was later pitched as a public policy ideal, partly to ameliorate the security vulnerabilities presented by the ‘Internet of Things’ (IoT). It has since found expression in statutory instruments covering cybersecurity more generally. Together with related concepts, such as ‘data protection by design and by default’ (DPbDD) and ‘cyber hygiene’, the SbD mantra forms part of a recently emergent nomenclature in the cybersecurity rhetoric of both soft and hard law instruments. It also adds to an expanding ‘by design’ discourse focused on integrating various values into technology production processes. In regulatory theory, SbD—similarly to DPbDD and many other ‘by design’ mantras—may be classified as an instance of ‘design-based regulation’ in the sense employed by Yeung. That is, it is concerned with the ‘embedding of standards into design at the standard-setting stage in order to foster social outcomes deemed desirable’.

However, the precise meaning and normative implications of the SbD mantra remain nebulous, particularly in a legal context. While a substantial body of legislation, recommendations, guidelines and technical standards seeking to promote SbD or SbD-related ideals now exists, authoritative guidance on how SbD should be construed as a legal requirement is scant. This is problematic, as it could engender arbitrarily divergent operationalisation of SbD as a hard law norm.

Critical legal scholarship on the semantics and regulatory significance of SbD as such is almost non-existent, although there is a fairly comprehensive body of academic legal literature indirectly analysing SbD and its closely related norms and ideals. The scant academic treatment of SbD is surprising given the huge amount of scholarship on data protection, cybersecurity and internet governance, along with the rapid growth of ‘by design’ discourse. This paucity of attention might be partly due to an attitude amongst legal scholars that SbD is a relatively mundane, uninteresting matter. Yet, it also arguably reflects the absence of the mantra’s express manifestation in a bespoke, high-profile legislative provision akin to Article 25 of the General Data Protection Regulation (GDPR). The latter has undoubtedly played a key role in bringing DPbDD into the limelight. As shown further on in this article, SbD is, in part, baked into Article 25 GDPR. Hence, parsing the latter ought to bring elements of SbD into greater focus.

This article aims to correct the deficit in scholarship on SbD. The chief research questions addressed herein are as follows: (i) What are the origins and catalysts of SbD? (ii) What does it denote and what are its goals? (iii) How is it legally operationalised? (iv) How ought it to be legally operationalised? (v) What may hinder its desired operationalisation? Exploration of these questions is carried out using the more established discourses on DPbDD and ‘privacy by design’ (PbD) as foils or comparative benchmarks.

The questions concerned with legal operationalisation are approached with a focus mainly on relevant developments in European Union (EU) law, albeit with some lines of comparison drawn to law in other jurisdictions. In this regard, the article is predominantly concerned with parsing and analysing the key constituents of SbD as a legal norm rather than attempting to provide answers on all aspects of its legal operationalisation. For example, issues of responsibility and liability for legal operationalisation (or non-operationalisation) are treated only superficially.

The core arguments advanced in the article are, first, that SbD fundamentally aims to ensure security requirements ‘in the books’ gain practical purchase; as such, it is a valuable addition to cybersecurity law and policy. Second, in respect of EU law, SbD has gone from being merely a technical engineering standard to becoming entrenched as a hard law norm and, in relation to the processing of personal data, a regulatory principle inhering not just in secondary legislation but also in the EU’s constitutional fabric. Third, realising SbD ideals will be far from straightforward and is likely to be hindered, at least in the short term, by poor communication of what these ideals mean and by particular characteristics of current computer engineering culture. Fourth, a more serious, long-term hindrance could arise if SbD’s legitimacy is undermined as a result of its use to further authoritarian or corporate interests at the expense of civil liberties or consumer protection.

Apart from attempting to elucidate the roots, semantics and normative dimensions of SbD, this article is written with a view to distilling lessons for legislators charged with drafting statutory rules embracing the SbD mantra and for other regulatory authorities charged with overseeing the rules’ implementation. However, the article is also written for other relevant stakeholders, of which there are many. These include producers of computer software and hardware, industry standards-setting bodies, government procurement agencies, consumer protection groups, other purchasers and end users of information systems and, indeed, any person or entity providing a potential vector for attacks on cybersecurity. Taken as a whole, these stakeholders have widely varying degrees of consciousness, competence and interest in security planning. This makes the task of getting the necessary traction for SbD ideals all the more challenging. It also makes elucidating the semantics and rationale of SbD as a nascent regulatory principle even more important. A lack of clarity in this respect will likely undermine the traction of SbD in the mindsets of many of these stakeholders.

Although this article is primarily concerned with cybersecurity norms, much of its parsing of the SbD mantra—particularly the ‘by design’ constituent—is also relevant for elucidating aspects of other design-focused concepts. As shown in Section 4.3, the use of ‘by design’ norms is ballooning across a wide range of contexts. In many of these cases, the concept of design remains largely unexplained. The unpacking of the design concept in Section 5.5 may provide useful guidance for interpreting it in parallel discourses.

This article proceeds from two main points of departure that are partly factual, partly theoretical. The first is that the central policy problem SbD is aimed at countering—deficient IS security—presents intractable challenges. It is an inherently ‘wicked problem’ in the sense used by Rittel and Webber in their famous attack on the hubris of societal planning. As part of that attack, they distinguished between societal problems that require government planning and problems in the natural sciences. In contrast to the latter problems, ‘which are definable and separable and may have solutions that are findable’, Rittel and Webber argued as follows:

[T]he problems of governmental planning—and especially those of social or policy planning—are ill-defined; and they rely upon elusive political judgment for resolution. (Not ‘solution’. Social problems are never solved. At best they are only re-solved—over and over again.).

They further warned about the inherent difficulty of:

identifying the actions that might effectively narrow the gap between what-is and what-ought-to-be. As we seek to improve the effectiveness of actions in pursuit of valued outcomes, as system boundaries get stretched, and as we become more sophisticated about the complex workings of open societal systems, it becomes ever more difficult to make the planning idea operational.

A consequence of recognising the ‘wicked’ character of the cybersecurity ‘problem’ is that SbD—and regulatory support for SbD—cannot be expected to deliver perfect security; at best, one can expect delivery of an iterative ‘satisficing’ level of security (to draw on Simon’s terminology). I return to this viewpoint in Sections 5.7 and 6.1.

The second point of departure is that legislative support for SbD ideals is necessary. Economic incentives to ‘hardwire’, as it were, security into IS development are currently weak. Hence, the market alone cannot be realistically expected to deliver the requisite support for cybersecurity standards; regulatory intervention is required. Support for this viewpoint is provided in Section 4.2.

2. Evolution of Security by Design from Engineering Method to Public Policy Ideal

2.1 Engineering Method

The centrality of design for achieving adequate security of computer systems has long been recognised. In 1970, an influential report by the US Defense Science Board Task Force on Computer Security stated that ‘[p]roviding satisfactory security controls in a computer system is in itself a system design problem’. Somewhat ominously, it also stated that ‘[d]esigners of secure systems are still on the steep part of the learning curve and much insight and operational experience with such systems is needed’. Over half a century later, both observations still ring true, as Section 4.2 indirectly indicates. Nonetheless, significant progress has been made since the 1970s in the conception, development and refinement of security engineering methods, partly under the flag of ‘security by design’.

The SbD concept originates partly in work by computer scientists who, back in the 1970s, propounded generic design principles for ensuring the security of computer systems. These principles did not expressly reference SbD, but they reflected the aforementioned view of the US Defense Science Board Task Force on Computer Security that the provision of adequate computer security depends on proper design. In subsequent years, various sets of security design principles were formulated, although not always consistently, without using the SbD mantra.

Work on design-focused engineering methods supplemented the successive elaboration of design principles. For present purposes, a particularly noteworthy line of engineering research has been directed at establishing patterns of design to eliminate the accidental insertion of security flaws into software or to minimise the adverse effects of such flaws. This work was first referred to as identifying ‘secure design patterns’. It was subsequently called a ‘secure by design’ approach to software development. Santos and others have summarised this approach as follows:

Secure by design is an approach to developing secure software systems from the ground up. In such approach [sic], the alternate security tactics are first thought; among them, the best are selected and enforced by the architecture design, and then used as guiding principles for developers. … [D]esign flaws in the architecture of a software system mean that successful attacks could result in enormous consequences. Therefore, secure by design shifts the main focus of software assurance from finding security bugs to identifying architectural flaws in the design.
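By way of illustration only, and not as a restatement of the approach described by Santos and others, the shift from finding security bugs to designing out architectural flaws can be sketched in a few lines of Python. The example below is hypothetical: a data-access layer whose only query interface is parameterised excludes SQL-injection flaws by construction rather than leaving them to be patched after discovery.

```python
import sqlite3

# A deliberately narrow data-access layer: callers can never concatenate
# untrusted input into SQL, because the only way to query the store is
# through parameterised statements fixed at design time.
class UserStore:
    def __init__(self, path: str = ":memory:"):
        self._conn = sqlite3.connect(path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add_user(self, name: str) -> None:
        # The "?" placeholder ensures the driver treats 'name' as data, not SQL.
        self._conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self._conn.commit()

    def find_by_name(self, name: str) -> list[tuple]:
        # No method accepts a raw SQL string from the caller, so injection
        # flaws are excluded by the architecture rather than by later review.
        cur = self._conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
        return cur.fetchall()

store = UserStore()
store.add_user("Alice'); DROP TABLE users; --")  # stored as an ordinary string
print(store.find_by_name("Alice'); DROP TABLE users; --"))
```

The design choice, rather than any individual line of code, is what does the security work: by removing the possibility of ad hoc query construction, the architecture makes a whole class of attacks unavailable to begin with.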

Aspects of this approach were later incorporated into technical–industrial codes of practice, particularly with respect to the IoT. An early example is ‘IoT Security Guideline v1.2’, published in November 2017 by the Internet of Things Alliance Australia (IoTAA). The guideline expressly ‘promotes a “security by design” approach to IoT’, and sets out a comprehensive ‘IoT Trust Framework’.

A similar example is the set of IoT-related ‘Baseline Security Recommendations’ published at around the same time by the then EU Agency for Network and Information Security (ENISA)—now EU Agency for Cybersecurity. The recommendations embraced ‘security by design’ as a ‘security good practice’—along with several other such practices, including ‘privacy by design’—and delineated it as seven commands:

GP-PS-01: Consider the security of the whole IoT system from a consistent and holistic approach during its whole lifecycle across all levels of device/application design and development, integrating security throughout the development, manufacture, and deployment.

GP-PS-02: Ensure the ability to integrate different security policies and techniques.

GP-PS-03: Security must consider the risk posed to human safety.

GP-PS-04: Designing for power conservation should not compromise security.

GP-PS-05: Design architecture by compartments to encapsulate elements in case of attacks.

GP-PS-06: For IoT hardware manufacturers and IoT software developers it is necessary to implement test plans to verify whether the product performs as it is expected. Penetration tests help to identify malformed input handling, authentication bypass attempts and overall security posture.

GP-PS-07: For IoT software developers it is important to conduct code review during implementation as it helps to reduce bugs in a final version of a product.

2.2 Public Policy Ideal in Europe

Prior to the publication of the IoT-related codes cited above, the European Commission elevated SbD to a public policy ideal in its Cyber Security Strategy of the European Union, published in 2013. As part of this strategy, the Commission invited stakeholders to ‘[s]timulate the development and adoption of industry-led security standards, technical norms and security-by-design and privacy-by-design principles by ICT [information and communication technology] product manufacturers and service providers, including cloud providers’.

In the next iteration of its strategy, published in September 2017, the Commission called for ‘[t]he use of “security by design” methods in low-cost, digital, interconnected mass consumer devices which make up the Internet of Things’. This in turn was pitched as a precondition for achieving greater ‘cyber resilience’, also in respect of critical infrastructure and essential services. A few weeks later, the European Parliament issued a resolution along similar lines, urging the Commission and Member States ‘to promote the security by design approach’ and urging ‘industry to include security by design solutions in all [IoT] … devices’.

None of these policy pronouncements elaborated on what SbD precisely involves or how it is to be legally operationalised, apart from linking it to a general ‘duty of care’ principle and to the implementation of the EU Network and Information Systems Security Directive 2016 (NISD). Much the same can be said about the Commission’s latest Cybersecurity Strategy, which also flags SbD as a measure for enhancing ‘resilience’. Nonetheless, the Commission has implicitly viewed its overhaul of the NISD as (at least) one step in strengthening the mantra’s legal traction. Elements of this reform process are presented in Section 3.5 of this article. The EU Council of Ministers has also treated SbD as an essential element of ‘Europe’s overall cyber resilience’ and implicitly supported efforts to anchor it in ongoing reform of EU legislation, particularly with regard to cybersecurity risks of connected devices.

Consumer organisations and national governments have played an important role in adding flesh to SbD as a public policy ideal. Six months after the European Commission issued its 2017 Cybersecurity Strategy, ANEC (the European Association for Co-Ordination of Consumer Representation in Standardisation) and BEUC (the European Consumer Organisation) published a position paper calling for comprehensive reform of EU regulatory policy on IoT-connected devices used by consumers, with increased emphasis on requiring ‘security by design and by default’, which they defined as follows:

Security by design means that all connected products and services should better incorporate state of the art cybersecurity functionalities at an early stage of their design process and before the products are put on the market. Security by default means that the settings of a connected device and service are secure as a basic setting (e.g. only high-security measures for authentication such as complex and long passwords should be allowed for ID authentication).
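A minimal sketch, of my own devising and not drawn from the ANEC/BEUC paper, may help concretise what ‘secure as a basic setting’ can mean in engineering terms: a configuration object whose default values are the most protective ones, so that a first user who changes nothing still receives, for instance, mandatory transport encryption, a disabled remote-administration interface and a strong-password requirement. All names and parameters are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceConfig:
    # Secure values are the defaults; a user must take deliberate action
    # to weaken any of them.
    require_tls: bool = True              # all traffic encrypted in transit
    remote_admin_enabled: bool = False    # management interface off by default
    auto_update_enabled: bool = True      # security patches applied automatically
    min_password_length: int = 12         # long passphrases required for authentication
    telemetry_enabled: bool = False       # no data sharing unless opted into

def is_password_acceptable(cfg: DeviceConfig, candidate: str) -> bool:
    """Reject credentials that fall below the configured baseline."""
    return len(candidate) >= cfg.min_password_length

cfg = DeviceConfig()                            # the first user receives the secure defaults
print(cfg.remote_admin_enabled)                 # False
print(is_password_acceptable(cfg, "changeme"))  # False: too short
```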

Shortly afterwards, the UK Government published a report focusing on the need for companies to ‘design products and services with security in mind, from product development through to the entire product lifecycle’ as a pertinent way of meeting the consumer protection challenges posed by IoT. The report set forth a ‘soft law’ proposal in the form of a draft Code of Practice for Security in Consumer IoT Products and Associated Services, which the UK Government hoped would suffice to motivate industry to embrace SbD. While noting its preference for a ‘market’ solution, the Government warned that if industry did not conform ‘quickly’ to the code of practice, it would ‘look to make these guidelines compulsory through law’. The code of practice was formally adopted in October 2018. Despite this initiative, the UK Government decided in April 2021 to act on its threat to enact legislation aimed at ensuring better security for connected consumer products. At the time of finalising this article, a legislative bill had yet to be published.

The UK Government’s 2018 report proved influential in shaping cybersecurity policy initiatives in other European countries. For instance, Norway’s ICT Security Committee (‘IKT-sikkerhetsutvalget’), which was appointed to review, inter alia, Norwegian cybersecurity safeguards, recommended in December 2018 that responsibility for the security of connected devices and services should, to a greater extent, be transferred from consumers to the producers and distributors of such devices/services. With reference to the UK Government’s report, it flagged ‘security by design’ (or ‘innebygd sikkerhet’ (built-in security)) as a means of effecting this transfer. However, it did not provide much detail as to what this actually entails.

2.3 The Work of the OECD

This presentation so far might give the impression that SbD, as a public policy ideal, is a recent European invention. It is not, nor were EU institutions the first to express SbD-related ideals as transnational public policy. Almost three decades ago, the Organisation for Economic Co-operation and Development (OECD) issued Guidelines for the Security of Information Systems in which these ideals were indirectly flagged, albeit under the banner of an ‘integration principle’ which stipulated that measures for IS security ‘should be co-ordinated and integrated with each other and with other measures … so as to create a coherent system of security’. The Explanatory Memorandum to the Guidelines briefly elaborated on this principle by noting that ‘[s]ecurity of information systems is best considered when the system is being designed’.

In the revision of the Guidelines a decade later, the ‘integration principle’ was reformulated as ‘security design and implementation’, and delineated as follows:

Systems, networks and policies need to be properly designed, implemented and co-ordinated to optimise security. A major, but not exclusive focus of this effort is the design and adoption of appropriate safeguards and solutions to avoid or limit potential harm from identified threats and vulnerabilities. Both technical and non-technical safeguards and solutions are required and should be proportionate to the value of the information on the organisation’s systems and networks. Security should be a fundamental element of all products, services, systems and networks, and an integral part of system design and architecture. For end users, security design and implementation consists largely of selecting and configuring products and services for their system.

In effect, this formulation elevated SbD to a soft law principle in its own right, even if the text omitted the ‘by’ of the SbD mantra.

Other OECD instruments have since replaced the 2002 Guidelines, and while their thrust is broadly in line with the latter and pays some heed to the importance of design, they significantly reduce the salience of SbD as a principle in itself. This reflects a shift in the OECD’s focus from the ‘security of information systems and networks’ to the security risks for economic and social activities relying on such systems and networks. It also marks the OECD’s greater orientation towards encouraging holistic risk management and ensuring that security measures do not constitute a disproportionate interference with those actors affected by them. Unsurprisingly, the OECD’s work is rarely referenced in the more recent European policy initiatives that expressly flag SbD.

2.4 Beyond Europe and the OECD

Besides the OECD, other actors beyond Europe have also pushed public policy ideals similar to SbD. For instance, the US Federal Trade Commission (FTC) adopted in 2015 a set of ten ‘Start with Security’ principles for businesses that parallel the broad thrust of SbD but without expressly mentioning the mantra. Arguably, however, support for SbD ideals may be found in FTC policy documents dating from well over a decade ago.

Another example comes from Latin America. In 2019, the Ibero-American Data Protection Network (Red Iberoamericana de Protección de Datos; RIPD) recommended ‘security by design and default’ as part of efforts to institute both ‘privacy by design’ and ‘ethics by design’ when developing artificial intelligence (AI).

A further example is the United Nations (UN) Committee on the Rights of the Child. According to the Committee, respect for the human rights that children enjoy under the UN Convention on the Rights of the Child requires States to ensure, inter alia, ‘a high standard of cybersecurity, privacy-by-design and safety-by-design in the digital services and products that children use’.

3. The Emergence of Security by Design as a Legal Norm

3.1 EU Legal Norms on Security by Design: An Overview

Having described the emergence of SbD from engineering method to public policy ideal, I turn now to its legal status. The numerous references to SbD in the strategy documents outlined above tend to skip over its legal dimensions. Can one say that SbD is not merely a publicly lauded engineering method or set of technical standards but also a hard law requirement? In the following, I argue in the affirmative, at least in respect of EU law and certain other jurisdictions.

In EU law, manifestations of SbD and SbD-related ideals are springing up across multiple legal instruments and at various rungs of the legal hierarchy, although not always in the form of express references to SbD. As shown below, SbD-related norms initially emerged in EU secondary legislation on data protection and then spread to other EU secondary legislation. They have also put down roots in the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR) and the Charter of Fundamental Rights of the European Union (CFREU).

EU data protection legislation has long required applying, in effect, an SbD approach to the development and deployment of information systems that process personal data. The GDPR’s predecessor—the 1995 Data Protection Directive (DPD)—expressed such a stipulation both in its preamble and operative provisions. Under current law, Article 32(1) GDPR requires controllers and processors of personal data to ‘implement appropriate technical and organisational measures’ to ensure that the data are kept secure. This requirement operationalises the core data protection principles of ‘integrity and confidentiality’ (see Article 5(1)(f) GDPR, set out below) and ‘accountability’ (see Article 5(2) GDPR; see also Article 24(1) GDPR) with which controllers (and, indirectly, processors) must conform. While the requirement eschews express mention of ‘design’, a design element lies implicit in it, as elaborated further in Section 3.3 of this article.

The GDPR’s provisions dealing with ‘data protection by design and by default’ in Article 25 reinforce the design element and make it explicit. In summary, Article 25 imposes a qualified duty on controllers (and, indirectly, processors) of personal data to ‘implement appropriate technical and organisational measures … which are designed to implement data-protection principles … in an effective manner’ so that the processing of the data will meet the regulation’s requirements and otherwise ensure protection of the data subject’s rights (Article 25(1)). As made subsequently clear in Section 5.5, the reference to ‘designed’ in Article 25(1) should be construed as denoting not simply ‘intended’ but also engineered and conceptualised. Although diffuse, Article 25(1) undoubtedly embraces a security remit, particularly given that one of the regulation’s core principles is—as noted previously—that of ‘integrity and confidentiality’. The principle states that personal data shall be ‘processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures’ (‘integrity and confidentiality’) (Article 5(1)(f)). A security remit is also clearly present in the ‘data protection by default’ requirements of Article 25(2) GDPR. To a large extent, those requirements are essentially concerned with keeping personal data ‘lean and locked up’, as it were.

In respect of information systems that do not necessarily process personal data, both the EU Network and Information Systems Security Directive 2016 (NISD) and the EU Cybersecurity Act 2019 (CA) manifest SbD ideals. The NISD requires EU Member States to ensure that ‘operators of essential services’ and ‘digital service providers’ implement ‘appropriate and proportionate technical and organisational measures to manage the risks posed to the security of network and information systems which they use’ in their operations or services (Articles 14(1) and 16(1)). This is basically the same formulation as used in Article 32 GDPR but applies regardless of whether personal data are being processed. Its SbD dimensions are examined in greater detail in Section 3.5.

The Cybersecurity Act has two main objectives: (i) establishing a long-term organisational framework and mandate for the EU Agency for Cybersecurity (ENISA) and (ii) establishing a pan-European cybersecurity certification framework. Especially pertinent for present purposes is the Act’s explicit endorsement of SbD. In this respect, it initially notes (in recital 2): ‘While an increasing number of devices is connected to the internet, security and resilience are not sufficiently built in by design, leading to insufficient cybersecurity’. As a remedy to this problem, the Act goes on to state (in recital 12):

Organisations, manufacturers or providers involved in the design and development of ICT products, ICT services or ICT processes should be encouraged to implement measures at the earliest stages of design and development to protect the security of those products, services and processes to the highest possible degree, in such a way that the occurrence of cyberattacks is presumed and their impact is anticipated and minimised (‘security-by-design’).

It also describes (in recital 13) a need for ‘security by default’:

Undertakings, organisations and the public sector should configure the ICT products, ICT services or ICT processes designed by them in a way that ensures a higher level of security which should enable the first user to receive a default configuration with the most secure settings possible (‘security by default’).

The Act further charges ENISA with playing a ‘central role’ in ‘promot[ing] security-by-design and privacy-by-design at Union level’ (recital 41). Although these exhortations are only part of the Act’s preamble, they underline important elements of its operative provisions. For example, the European cybersecurity certification scheme established pursuant to the Act ‘shall be designed to achieve’, inter alia, that ‘ICT products, ICT services and ICT processes are secure by default and by design’ (Article 51(i)). The references here to ‘secure by default and by design’ are not defined elsewhere in the Act’s operative provisions, so they must ordinarily be construed in line with their explication in the recitals.

Additionally, multiple EU sectoral regulatory instruments expressly or implicitly flag the importance of SbD. Examples are the Medical Devices Regulation 2017, the Digital Content Directive 2019 (DCD), the Financial Markets Directive 2014 and the Electronic Identification, Authentication and Trust Services (eIDAS) Regulation 2014. SbD-related norms are also found in recent legislative initiatives concerning AI, revision of the NISD, revision of the Machinery Directive 2006 and the proposed Data Act.

It is also striking to observe sporadic legislative references to SbD as a ‘principle’. One example is the Electronic Communications Code 2019 (ECC). This lays down as a core obligation that ‘providers of public electronic communications networks or of publicly available electronic communications services take appropriate and proportionate technical and organisational measures to appropriately manage the risks posed to the security of networks and services’ (Article 40(1); see also recitals 94–95). To this end, recital 97 ECC singles out encryption as an example of such measures and expressly links it to SbD (along with PbD), stating, inter alia, that ‘where necessary, encryption should be mandatory in accordance with the principles of security and privacy by default and by design’ (emphasis added). Another example is Regulation 2021/887 establishing the European Cybersecurity Industrial, Technology and Research Competence Centre and the Network of National Coordination Centres. This stipulates ‘promoting … the principle of security by design’ as one of the Competence Centre’s objectives (Article 4(2)(b)), while recital 37 refers to the importance ‘that security by design is used as a principle in the process of developing, maintaining, operating and updating infrastructures, products and services …’.

What are the implications of using the word ‘principle’ to denote SbD? Is this terminology merely cosmetic in significance or does it signal a heightened normative status for SbD? I consider these questions in the next section.

3.2 Security by Design as Hard Law Principle?

The exact meaning of a ‘principle’ under EU law varies from context to context and is somewhat diffuse. In legal philosophy, a distinction is commonly made between ‘principles’ and ‘rules’: the former tend to depict norms with a higher level of generality, abstraction, open-endedness, persistence and explicit value-embodiment than rules, and they often help ground and justify the latter. Applying Alexy’s oft-cited perspective, for example, a principle is an ‘optimisation command’ or ‘ideal ought’ demanding realisation of a particular goal to the greatest extent possible, whereas rules are more concrete, conclusive ‘definitive commands’ with which compliance can be assessed relatively easily. While this distinction is contested, it is highly influential, informing much of the general constitutional legal discourse of continental Europe—and, increasingly, equivalent Anglo-American discourse as well.

Viewing SbD as a principle in the sense of ‘optimisation command’ makes apparent logical sense in light of the way the mantra is formulated in particular legal instruments, such as the Cybersecurity Act and its aforementioned recitals. Yet, as Von Bogdandy observes, one must keep in mind the distinction between what a principle is for the purposes of philosophy (or, indeed, for the purposes of applying technical standards) and what a legal system actually recognises as a principle. Mindful of this distinction, a legal formalist may rightly query whether the aforementioned references in the ECC and Regulation 2021/887 to SbD as a principle properly elevate SbD into the pantheon of regulatory principles recognised in EU law. The ECC mentions SbD only in its recitals, as opposed to its operative provisions, and recitals are not legally binding in themselves. The operative provisions of Regulation 2021/887 refer to SbD as a principle, yet the phrasing of recital 37 in the regulation suggests that this reference may simply be intended to denote an engineering approach or method—akin to, say, the ‘end-to-end’ (e2e) principle (explained in Section 4.2) that has steered the design of core internet architecture—as opposed to an overarching (or undergirding) legal norm. While SbD is doubtless a principle in the former sense, this does not necessarily mean it is a principle in the latter sense. Further, there is an almost inflationary use of ‘principles’-based language linked to SbD and related ‘by design’ mantras, with little indication of what the notion of ‘principle’ or that to which it is attached precisely means. A cautious legal formalist could easily be forgiven for instinctively viewing this development as an essentially rhetorical fad rather than the emergence of a stable norm properly embedded in law.

Even if neither the ECC nor Regulation 2021/887 properly establishes SbD as a legally binding ‘optimisation command’, they portend at the very least a development of SbD in that direction. The Commission’s proposal to replace the NISD regime with a new Directive (NIS2D) also refers to SbD as a ‘principle’ and, in doing so, uses much the same formulation as the ECC. The Council’s recently agreed General Approach in regard to that proposal takes the same line. Signs are thus emerging of a legislative consolidation of this development.

In my view, however, strong evidence exists that SbD is already a hard law principle for the purposes of EU law. This is most obvious in respect of security of personal data. Articles 32, 25 and 5 GDPR are the key constituents of this evidence, as the following section shows.

3.3 Security by Design as Hard Law Principle in Respect of Personal Data

Article 32 GDPR requires security measures on the basis of context-specific risk assessment: ‘In assessing the appropriate level of security account shall be taken in particular of the risks that are presented by processing …’ (Article 32(2); see also recital 83 GDPR). Admittedly, Article 32 does not expressly stipulate when risk assessment should take place; in theory, such assessment could simply occur after a data processing system is established. Nor does Article 32 expressly refer to ‘design’. However, Article 32(1) requires that account be taken of the ‘state of the art’. What does this criterion mean? Some language versions of the GDPR give the impression that ‘state of the art’ applies solely to the state of technology or technical measures. For example, the German formulation for ‘state of the art’ is ‘Stand der Technik’, the Danish formulation is ‘det aktuelle tekniske niveau’ and the Norwegian formulation is ‘den tekniske utviklingen’. However, the criterion extends also to organisational measures. In very general terms, the criterion means that security measures should be in accordance not only with the current state of technological advancement, but also with contemporary practices, standards and other norms for technical and organisational management that are generally regarded as offering the optimal degree of security at the time. Most importantly (for present purposes), current international technical standards and management system standards indicate that effective risk assessment must be conducted proactively. For instance, the risk management guidelines adopted by the International Organization for Standardization (ISO) state as a general principle that ‘[r]isk management anticipates, detects, acknowledges and responds to … [risks] in an appropriate and timely manner’. A proactive, design-focused approach to risk management is also endorsed by the ISO and International Electrotechnical Commission (IEC) in their standards dealing specifically with information security. Overall, these standards pitch risk management as concerned with anticipating and planning for reasonably foreseeable events. Thus, risk management ought to be initially conducted at the stage when a system for processing personal data is being conceived, not when a system is already created and operational. An ex ante ‘design’ element is implicit in this vision of risk management. Article 32 embodies this element and gives it legally binding force.

At the same time, Article 32 may be regarded as an offshoot of the general DPbDD requirements laid out in Article 25 GDPR, along with the accountability requirements of Articles 5(2) and 24(1) GDPR. Hence, Article 32 should also be read in light of these requirements. Article 32 has basically the same formulation as Article 25, albeit with a narrower compass and a lack of wording expressly addressing when its requirements should apply. That omission is of little substantial consequence given the explicit risk management focus of Article 32 which—as just indicated—demands proactive consideration of security at the outset of establishing a system for processing personal data. Article 25 is more explicit on this point: its requirements apply ‘at the time of the determination of the means for processing’ (Article 25(1)), as well as at the time of the processing itself. Thus, they must first be implemented at the planning stage for processing. According to the European Data Protection Board (EDPB), ‘[t]he “means for processing” range from the general to the detailed design elements of the processing, including the architecture, procedures, protocols, layout and appearance’. The Board goes on to spell out SbD as one constituent in the consideration of means: ‘Security by design – Consider security requirements as early as possible in the system design and development and continuously integrate and perform relevant tests’.

Turning more directly to SbD’s status as a principle for the processing of personal data, it is doubtful this status is established by Article 32 alone. However, Article 32 should be seen as an elaboration of the principle of ‘integrity and confidentiality’ in Article 5(1)(f) GDPR, which is indisputably one of the core principles of the GDPR and other data protection instruments. The use of the phrase ‘using appropriate technical or organisational measures’ in Article 5(1)(f) not only creates a link to Articles 32, 25 and 24, it also helps usher the design element of the SbD (and DPbDD) mantra into Article 5. This is not to intimate that Articles 5(1)(f) and 32 are fully coextensive. While the latter overlaps with and embodies Article 5(1)(f), only controllers (not controllers and processors) bear primary responsibility for complying with Article 5 principles (Article 5(2) GDPR). Additionally, Article 5(1)(f) lays down a general principle for data processing whereas Article 32 concerns the measures that must be implemented to ensure observance of that principle.

Further, with respect to personal data, SbD constitutes a facet of DPbDD, which must also be considered a principle of the EU legal regime. Admittedly, Article 25 GDPR is not expressly formulated as a principle (or set of principles), but rather refers to the ‘data-protection principles’ elsewhere in the Regulation (primarily those in Article 5). Apart from the implicit connection between Article 5(1)(f) and Articles 25, 32 and 24, Article 5 principles fail to reference DPbDD. However, provisions elsewhere in the GDPR list ‘data protection by design and by default’ as belonging to the ‘general data protection principles’ (Article 47(2)(d)). Other EU instruments also refer to DPbDD as belonging to the ‘core principles’ of the GDPR. Accordingly, solid grounds exist for treating SbD as a basic principle of EU data protection law.

3.4 The Role of Courts

Jurisprudence of the Court of Justice of the EU (CJEU) and the European Court of Human Rights (ECtHR) bolsters the legal status of SbD. Both courts have injected SbD-related ideals into Europe’s constitutional framework for fundamental rights protection. The ECtHR judgment in I v Finland (2008) is seminal in this respect. In this case, the Court held that Finland breached its positive obligation to ensure respect for private life under Article 8 ECHR because of a failure to provide ‘practical and effective protection to exclude any possibility of unauthorised access’ to patient data at a public hospital. Although the judgment did not explicitly reference SbD or closely related ‘by design’ mantras such as DPbDD, its thrust necessitates an approach in line with their ideals. In effect, the judgment renders SbD (and, concomitantly, DPbDD) an essential requirement of a state’s positive obligations to secure respect for the right laid out in Article 8 ECHR and imposes a high threshold for meeting that requirement, at least in relation to ensuring confidentiality of data concerning a person’s health. Good grounds exist, in terms of both lex lata and lex ferenda, for holding that these obligations also extend to other types of personal data and to functionalities other than just maintaining data confidentiality. In subsequent case law, the ECtHR has specified, in fairly generic terms, ‘procedures for preserving the integrity and confidentiality of data and procedures for its destruction’ as amongst the safeguards required by Article 8(2) ECHR.

The CJEU is likely to take a similar approach with respect to the obligations flowing from Articles 7 and 8 CFREU, along with Article 16 of the Treaty on the Functioning of the EU. This is because of the so-called homogeneity clause in Article 52(3) CFREU, combined with judicial confirmation that Article 7 CFREU (which provides for the right to respect for private and family life, home and communications) ‘must therefore be given the same meaning and the same scope as Article 8(1) of the ECHR, as interpreted by the case law of the ECtHR’. Moreover, in its landmark decision in the Digital Rights Ireland case, the CJEU strongly implied that the ‘essence’ of the right to data protection laid down in Article 8 CFREU requires respect for ‘certain principles of data protection and data security’, meaning that ‘Member States are to ensure that appropriate technical and organisational measures are adopted against accidental or unlawful destruction, accidental loss or alteration of the data’. Interestingly, the Court’s reference to ‘principles of … data security’ is derived from the language of the former Data Retention Directive (DRD), which the Court nullified in the case. Article 7 DRD described the data security norms it set out as ‘principles’. Two examples of these principles were: ‘the data shall be subject to appropriate technical and organisational measures to protect the data against accidental or unlawful destruction, accidental loss or alteration, or unauthorised or unlawful storage, processing, access or disclosure’ and ‘the data shall be subject to appropriate technical and organisational measures to ensure that they can be accessed by specially authorised personnel only’ (Articles 7(b) and (c)). The Court found these ‘principles’ to be insufficiently detailed and stringent to satisfy the requirements of Article 8 CFREU in light of the nature and quantity of the data being processed:

[Article 7 DRD] does not lay down rules which are specific and adapted to (i) the vast quantity of data whose retention is required by that directive, (ii) the sensitive nature of that data and (iii) the risk of unlawful access to that data, rules which would serve, in particular, to govern the protection and security of the data in question in a clear and strict manner in order to ensure their full integrity and confidentiality.

Additionally significant is the Court’s stance that the data security norms of the DPD and Electronic Privacy Directive did not make up for this failing because they allowed electronic communications providers to take account of ‘economic considerations … as regards the costs of implementing security measures’. Arnbak has rightly observed that the Court thereby ‘elevated’ security of communications data ‘to a central safeguard under Article 8 EU Charter—a matter of ex ante legislation to ensure security rather than relying on delegation, economic considerations of market operators, nor [sic] ex post incident recovery through other means of law’.

Jurisprudence of Germany’s Federal Constitutional Court (Bundesverfassungsgericht) has also helped to heighten the legal status of data security, both in German law and EU law. In 2008, the Court recognised a ‘fundamental right to the guarantee of the confidentiality and integrity of information-technology systems’ (‘Grundrecht auf Gewährleistung der Vertraulichkeit und Integrität informationstechnischer Systeme’) as part of the German Constitution. This right is grounded in and a manifestation of the more general right of personality (‘Persönlichkeitsrecht’) provided under Articles 1(1) and 2(1) of the German Constitution, yet it has a distinct security dimension demanding protection of both the confidentiality of an information-technology system and the system’s integrity. The Court defined the latter property in terms of preventing the system from being ‘accessed such that its performance, functions and storage contents can be used by third parties’, meaning, in turn, that ‘the crucial technical hurdle for spying, surveillance or manipulation of the system has then been overcome’.

There is strong evidence that the Court’s judgment also helped lift data security into the collection of core data protection principles set out in Article 5 GDPR. This evidence inheres not only in the similarity between the Court’s nomenclature for the constitutional right it recognised and the terminology of Article 5(1)(f) GDPR, but also in the regulation’s preparatory works. It is further likely that the judgment informed Digital Rights Ireland, even if the latter made no express reference to it. Note, though, that the Bundesverfassungsgericht did not embrace SbD as a constitutional norm; it simply elevated a form of cybersecurity to such a norm, passing over the status of the design or means for such security. Yet, by heightening the legal importance of cybersecurity, the judgment implicitly heightened the need for measures to protect the legal right at hand, which necessarily implicates design processes. Later jurisprudence of the Court has recognised these implications.

3.5 Security by Design as Hard Law Principle with Respect to Non-personal Data

What of SbD’s status under EU law in relation to the security of non-personal data? Can SbD also be properly regarded as a legally binding principle in this context? The answer is less clear cut and varies according to actor and sector, as indicated by the overview presented in Section 3.1. The NISD lays out the least sector-specific rules in point, yet, as noted in the foregoing, these are nonetheless actor-specific, applying only to ‘operators of essential services’ and ‘digital service providers’. The security requirements for the latter category of actor are, as a point of departure, lighter and more flexible than those for the former category. Recital 53 NISD states that digital service providers that are micro- and small enterprises may escape the Directive’s security requirements altogether. Accordingly, Articles 14(1) and 16(1) NISD—which, it will be recalled, contain rules paralleling Article 32 GDPR—do not apply to such enterprises, nor to a large array of other actors, such as manufacturers of electronic equipment or software developers.

Furthermore, the Directive does not contain provisions that, in the vein of Article 25 GDPR, make the design element explicit. This stunts the Directive’s ability to promote and act as a proxy for SbD ideals. Recital 51 of the Directive also stipulates that ‘[t]echnical and organisational measures imposed on operators of essential services and digital service providers should not require a particular commercial information and communications technology product to be designed, developed or manufactured in a particular manner’. This could be read as playing down the role of ex ante design. On the basis of the recital, Arnbak went so far as to claim that ‘the concept of security by design is clearly excluded from the regulatory measures’ concerned, although he failed to define precisely what he meant by ‘security by design’. Alternatively, I view recital 51 as being intended merely to signal that the requirements of Articles 14(1) and 16(1) NISD are commercially agnostic and thus open-ended in terms of innovation and ‘research and development’ (R&D). Thus, in my opinion, recital 51 does not necessarily exclude SbD.

More generally, it bears emphasis that risk management is key to the Directive’s ethos: ‘A culture of risk management, involving risk assessment and the implementation of security measures appropriate to the risks faced, should be promoted and developed through appropriate regulatory requirements and voluntary industry practices’ (recital 44 NISD; see also recitals 46 and 49). In addition, Articles 14(1) and 16(1) both state that their requirements are to take account of the ‘state of the art’, thus implying that the ISO/IEC standards described in Section 3.3 are also relevant here. The ‘front-foot’ approach endorsed by those standards is reinforced in Articles 14(2) and 16(2) NISD, which focus not just on minimising the impact of security breaches, but on preventing such breaches in the first place. Overall, while the Directive fails to mention SbD explicitly, it implicitly embraces and enforces its ideals.

These ideals are more salient in the Commission’s proposal to overhaul the NISD regime by replacing it with a new Directive (NIS2D). Recital 54 of the NIS2D proposal expressly links the ‘principle’ of SbD to the requirements of Article 18 in the proposed directive. Article 18 (entitled ‘cybersecurity risk management measures’) basically replicates Articles 14(1) and 16(1) NISD, but with some additions. Like the latter, Article 18 of the NIS2D proposal does not expressly mention ‘design’. However, it adds security criteria that strengthen the need for early planning and, hence, design. Particularly pertinent in this regard are express requirements of ‘risk analysis’ (Article 18(2)(a)), ‘supply chain security’ (Article 18(2)(d)), and ‘security in network and information systems acquisition, development and maintenance, including vulnerability handling and disclosure’ (Article 18(2)(e)). As stated earlier, these criteria are arguably not new additions to extant law but simply spell out criteria already inherent in the current Directive. The Council’s General Approach to the NIS2D proposal basically endorses its thrust, albeit with some further additions, such as express recognition of the need for an ‘all-hazards’ approach to security—that is, ‘protection for network and information systems and their physical environment from any event that could compromise the availability, authenticity, integrity or confidentiality of stored, transmitted or processed data or of services offered by, or accessible via, network and information systems’ (Article 18(1a)).

The NIS2D proposal dispenses with the current Directive’s focus on, and distinction between, operators of essential services and digital service providers (see Article 2 and Annexes 1 and 2). Hence, if the proposal is adopted, SbD will gain wider purchase as a legally binding norm with respect to systems that process non-personal data. It will, for instance, apply to manufacturers of electronic equipment (see Annex 2). The traction of SbD will likely be amplified further when and if other related legislative initiatives are adopted as well. One significant initiative in this regard is the Commission’s recent use of its delegated powers under the Radio Equipment Directive 2014 (RED) to extend the reach of some of the Directive’s ‘essential requirements’ for lawful placement of radio equipment on the European market, so that these requirements address, in effect, the cybersecurity capabilities of a wide range of IoT devices, including connected toys and wearables. Also noteworthy are the proposals for an Artificial Intelligence Act, a Cyber Resilience Act covering connected devices, a Data Act, and a new directive on the resilience of critical entities (CED). In addition, as outlined in Section 3.1, a range of other sectoral rule sets already embrace SbD. These are supplemented by the Cybersecurity Act, which posits SbD as a key agenda item both for ENISA’s operations and for the certification scheme the Act establishes, and by Regulation 2021/887, which places promotion of SbD firmly within the remit of the European Cybersecurity Industrial, Technology and Research Competence Centre.

Looked at holistically, the current regulatory framework for the security of systems that process non-personal data still resembles a patchwork quilt with uneven patterns and thicknesses. While SbD provides a strong visible thread for many patches of the quilt, other patches remain threadbare. This makes it challenging to conclude firmly that SbD may now properly be regarded as a fully fledged, hard law principle with a broad horizontal scope in respect of such systems. However, the appearance of SbD ideals across a variety of disparate sectoral instruments—from rules on medical devices to rules on financial services to rules on AI systems—indicates the emergence of SbD as a trans-sectoral regulatory principle. This development is likely to become more pronounced in the near future.

It is also a welcome development, not least because it will help ameliorate a shortcoming in the scope of SbD application pursuant to data protection law. As noted in Section 3.1, SbD requirements imposed by EU data protection law fall on the shoulders of data controllers and processors. Much the same applies to DPbDD requirements. This is a problem, inasmuch as many fundamental decisions in IS design and development are not made by these actors. Admittedly, recital 78 GDPR does refer to ‘producers’ of products, services and applications that involve processing of personal data, but these entities (to the extent they are neither controllers nor processors) are merely ‘encouraged’ to respect DPbDD and, implicitly, SbD. The degree to which they are formally obligated to integrate security considerations into their design decisions, also in respect of products, services and applications that involve processing of non-personal data, varies considerably, as this section indicates.

3.6 Security by Design As Hard Law Norm Beyond Europe

Legislative support for SbD ideals is not limited to the EU. For example, in 2017, Israel adopted landmark regulations pursuant to its data protection law that implicitly mandate an SbD-attuned mindset in respect of databases containing personal data. The regulations address security measures for personal data in a more elaborate manner than the GDPR does. In line with current risk management standards, they require, inter alia, database controllers and processors to identify and document the kinds of personal data in their databases, assess risks to the security of the data, classify the data according to these risks, formulate written procedures for addressing the risks, put the procedures into practice, conduct follow-up risk assessments periodically and remedy any new vulnerabilities identified.

Another example is legislation enacted in 2018 by California mandating, in effect, SbD for ‘connected devices’. These are defined as ‘any device, or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address’ (§1798.91.05(b)). The legislation has been characterised as ‘aimed mostly at forbidding devices from using generic passwords’. This is misleading: the legislation encompasses much more than authentication. Its core requirement (set out in §1798.91.04(a)) is as follows:

A manufacturer of a connected device shall equip the device with a reasonable security feature or features that are all of the following:

(1) Appropriate to the nature and function of the device.

(2) Appropriate to the information it may collect, contain, or transmit.

(3) Designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure.

This is supplemented by an authentication requirement (set out in §1798.91.04(b)) that—as Veale and Brown indicate—is essentially aimed at dispensing with the use of generic passwords. Despite significant carve-outs, the legislation is especially important given the paramount role of California-based corporations in shaping and developing digital architecture around the world. In effect since 1 January 2020, the legislation has caused confusion and panic over its implications, particularly owing to a lack of authoritative guidance on how it is to be construed.

4. Reasons for Greater Focus on SbD

4.1 Intuitive Appeal

The preceding sections show a distinct upswing of interest in promoting SbD specifically, not just as an engineering method and public policy ideal but also as a hard law norm. Why has this occurred? Multiple factors are at play, and many of them point to a pressing need for SbD and for greater legal support of its ideals.

The primary reason for the popularity of SbD is that it seems intuitively sensible. Building security mechanisms into IS architecture from the ground up and thereby averting security breaches that could be extremely costly to fix afterwards is a strategy that most people would find difficult to fault. The appeal of this strategy increases in line with the growing societal importance of the systems to which it applies. Accordingly, SbD has greater intuitive appeal with respect to systems that constitute or help constitute ‘critical infrastructure’. At the same time, it is becoming increasingly difficult technically, logically and economically to draw clear-cut distinctions between infrastructure that is ‘critical’ and other infrastructure. This widens the potential appeal of SbD. The deployment of the IoT is a case in point. Thus, it is not surprising that many technical standards bodies and regulatory authorities first flagged SbD as a remedy for IoT-related security challenges, as described in Section 2.

4.2 Scandal, Economics and Design Constraints

Another factor is the disturbingly large number of scandalous security breaches that have come to light in recent years. Hardly a week goes by without news of a major data ‘hack’, ‘breach’ or ‘leak’. These incidents often evidence lax attitudes towards cybersecurity, also in respect of large collections of sensitive data. Such complacency reflects, in turn, a range of other factors.

Especially important is the paucity of strong economic incentives for companies and other profit-driven bodies to invest up front in cybersecurity. The penalties—legal or otherwise—that organisations face for failing to implement adequate security measures are often affordable, especially for large wealthy corporations, although the degree of affordability will differ from jurisdiction to jurisdiction. Manufacturers of hardware and software frequently view security investment as an unnecessarily expensive drag on product development and deployment; accordingly, they will initially tend not to make the security of their products a high priority in their struggle to gain market traction. This pattern is exacerbated by the fact that manufacturers often escape having to bear the full economic costs of security breaches arising from flaws in their products.

Further, Asghari and others note that when marketing software, vendors:

will lure customers with bells and whistles that are visible features or provide convenience. Security is rather intangible and does not easily fit into these considerations; it might even reduce functionality.

Insofar as security is a selling feature, it tends to give rise to a classic ‘lemons market’ in Akerlof’s terms. This market is characterised by informational asymmetry between vendors and consumers, with the latter typically unable to assess the true value of the security feature(s) being marketed and thus unwilling to pay a high price for the more secure product, which in turn discourages vendors from offering the product. Exacerbating this informational asymmetry is reluctance by many companies to share information about cybersecurity breaches and vulnerabilities they experience or know about. The interest of particular national security agencies in exploiting such vulnerabilities buttresses and feeds off this problem.

Yet another factor contributing to the extensive number of security breaches is that basic internet architecture has been constructed with simplicity, flexibility, openness and resilience predominantly in mind, not with physical or logical fencing. This is less a symptom of the economic factors outlined earlier and more a reflection of the design philosophy of the computer scientists responsible for constructing the original internet—that is, the publicly accessible network based on the transmission control protocol (TCP)/internet protocol (IP) suite. Central to this philosophy is the ‘end-to-end’ principle, which basically posits that the medium for data transmission should be kept simple and focus only on transferring data packets efficiently; other applications (‘intelligence’) should be provided at the network ‘endpoints’. Thus, ‘[t]he network’s job is to transmit datagrams as efficiently and flexibly as possible. Everything else should be done at the fringes’. In this context, robust security has been relegated to the category of costly intelligence and left for insertion at the network ‘endpoints’, not in the network ‘pipes’.

More generally, concern for IS security has traditionally struggled to gain strong, widespread traction in the engineering community. This seems to be partly because of a feeling amongst a considerable number of engineers that integrating security mechanisms into IS architecture is neither their primary responsibility nor a source of professional pleasure. Thus, engineering culture exhibits particular traits that weaken the grip of security mechanisms on IS design. These traits also have consequences for the future traction of SbD ideals. I return to them in Section 6.2.

4.3 Regulatory (and Rhetorical) Trends

Finally, the growing popularity of specific regulatory strategies has inspired and moulded the upswing of interest in promoting SbD. Especially significant in this respect is the increasing embracement of a ‘risk-based’ approach to regulation, which involves the integration of risk management measures in legislative frameworks. It champions a pre-emptive, ex ante regulatory stance that prompts regulatees to anticipate and mitigate the risk of unwanted events, as opposed to a reactive, ex post facto strategy geared to allocating responsibility and liability after such events occur. The approach creates fertile soil for sowing SbD and related ‘by design’ ideals as part of risk management measures. As Section 3.3 indicates, the data protection field exemplifies this development well, with the GDPR showing a pronounced concern for both DPbDD and SbD and for ensuring that risks to fundamental rights be properly managed by data controllers and processors prior to, and throughout the lifecycle of, the processing of personal data.

Accompanying this development is a burgeoning set of ‘by design’ discourses that have helped spur or shape the increased deployment of risk-based and design-based approaches to regulation. The upswing of popularity of SbD in these emerging regulatory frameworks is linked most closely to the growing salience of the interrelated discourses on PbD and DPbDD. Both of these discourses are aimed at ensuring that proper care is taken of privacy-related interests during the various stages of IS development. In turn, these discourses feed into, and off, broader interdisciplinary endeavours, such as ‘value-sensitive design’ and ‘responsible innovation’, which seek to embed important human values, particularly those central to virtue ethics, in the technology design process. On the legal plane, these endeavours resonate with the vision of ‘Ambient Law’ promoted by Hildebrandt and Koops, whereby legal safeguards for privacy-related interests, human autonomy and rule of law are to be ‘inscribed’ into the ‘socio-technical infrastructure’ during its construction so that it articulates and preserves those safeguards. Additionally, there are calls for ‘by design’ norms emerging in a growing number of other, relatively specific contexts, including enforcement of intellectual property rights, government administration, algorithmic regulation, robotics regulation and AI regulation.

Central to the rationale for all of these endeavours and discourses is the recognition that technology plays a key role in setting the parameters for human conduct and that it can often shape our behaviour more effectively than legal norms. Thus, embedding legal norms or ideals in IS architecture and other technology is assumed to improve their traction considerably, in part by helping automate their application. Linked to this assumption is the promise of a reduction in the degree to which technological–organisational developments outpace legislative efforts.

With its relatively technocratic focus, SbD does not fit squarely with all variants of these discourses (especially those concerned with the technological embedment of virtue ethics). Nor, concomitantly, have all of these variants played an equally important role in catalysing the growth and popularity of the SbD mantra. As already indicated, the PbD and DPbDD variants have played the largest role in the latter regard. This is because SbD has frequently been paired with or conceptually baked into them, especially the PbD variant. Examples of pairing occur in some of the aforementioned EU regulatory instruments. As for conceptual merger, this is best exemplified in Cavoukian’s work. One of the seven foundational principles making up her influential conception of PbD is ‘End-to-End Security’, which she describes as follows:

Privacy by Design, having been embedded into the system prior to the first element of information being collected, extends securely throughout the entire lifecycle of the data involved—strong security measures are essential to privacy, from start to finish. This ensures that all data are securely retained, and then securely destroyed at the end of the process, in a timely fashion. Thus, Privacy by Design ensures cradle to grave, secure lifecycle management of information, end-to-end.

The conceptual merger of SbD with PbD has continued into the organisational sphere, with, inter alia, Cavoukian’s establishment in 2016 of an International Council on Global Privacy and Security by Design.

It must be remembered, though, that privacy and data protection are not fully commensurate with security and cybersecurity. While the latter are components of a privacy or data protection regime, such a regime also embraces other rules and measures (eg concerning the transparency and fairness of data processing practices). Moreover, cybersecurity on its own may serve a broader range of concerns than data protection and, in particular contexts, these concerns may even conflict with privacy-related interests. It also bears emphasis that Cavoukian’s conception of PbD is not necessarily fully commensurate with EU conceptions of DPbDD. The former reflects North American views and realities that can differ, sometimes subtly, from those in Europe, and it is not tied to particular legislation. In contrast, DPbDD, as manifest in Article 25 GDPR, Article 20 LED and Article 27 EUIDPR, is a legally autonomous construct of EU law, and the measures it requires are tethered to EU legal standards.

5. The Semantics of Security by Design

5.1 Introduction

On the surface, the SbD mantra seems fairly self-explanatory: it denotes a result (‘security’) that is achieved by action(s) (‘design’) undertaken to achieve that result. Further reflection, though, reveals a range of problems that can stymie its smooth operationalisation as a legal principle. Some of these are directly semantic; for example, what precisely is ‘security’? Some concern the assumptions that the mantra suggests, such as the assumption that security is actually possible to achieve by design. The latter set of problems is dealt with in Section 6, while this section deals with semantic problems. Both sets of problems are approached in an exploratory way with a view to parsing general conceptual and methodological issues raised by SbD and with a view to casting light on how SbD may be understood in law (primarily EU law), but without pretending to arrive at clear-cut definitive conclusions on its legal interpretation.

The existence of a sizeable and expanding body of design-focused regulatory discourse, as outlined in Section 4.3, provides an increasingly rich set of reference points for assessing the meaning, mechanics and normative dimensions of SbD. In the following, I draw mainly upon the PbD and DPbDD discourses for this assessment, as they lie closest to the ideals of SbD.

5.2 The Meaning of ‘Security’

The semantics of the term ‘security’ are contested and often contentious. The term has been labelled ‘overloaded’, ‘slippery’ and ‘perilously capable of meaning all things to all comers’. The chameleon-like character of its semantics largely reflects the fact that security is usually not an end in itself but a proxy for some other interest or value—sometimes termed ‘referent object’—such as personal privacy, human safety, business profitability or state sovereignty. The nature of such interests or referent objects will shape the connotations of the term in a given context and how broadly drawn the term is. The peculiarities of the legal–regulatory culture in which the term is used may also impact the degree of conceptual clarity offered. The evolution of EU regulatory policy on cybersecurity illustrates this dynamic well. As Arnbak notes, this policy has developed without a ‘coherent understanding at the EU level about how to define “security”, and how its underlying values operate, relate or should be interpreted’, a state of affairs that ‘has allowed powerful actors to paint communications security any color [sic] they like’.

Pinning down and parsing the term ‘security’ is vital to operationalise the SbD mantra usefully. At a very abstract level, security generally denotes an ‘absence or limitation of vulnerabilities or threats’. However, there is an obvious need to define the term in considerably greater detail to make proper sense of it, particularly as part of the SbD mantra. As the foregoing sections show, SbD is almost exclusively flagged in connection with digital information systems. Accordingly, the legal manifestation of SbD ideals occurs predominantly in law dealing with the security of information systems and the data and information they process. The following consideration of security semantics focuses on that particular context, even though the SbD mantra is sufficiently generic in the abstract to be applied in other contexts.

In relation to data, information and information systems, security is, at the very least, to be understood as safeguarding the confidentiality, integrity and availability of the assets concerned—in other words, the classic ‘CIA’ triad of security properties. Well-nigh universal agreement exists on their central role in this context. At the risk of spelling out the obvious, confidentiality indicates that the assets are protected from unauthorised disclosure, integrity that they are protected from unauthorised modification, and availability that they are accessible and usable on demand by authorised actors, systems or programmes. Closely related properties that may be, and frequently are, added to this mix include resilience (the ability of an IS to deliver intended outcomes despite security breaches), reliability (the property of the IS behaving and producing results consistently and as intended), authenticity (the property that an entity (data, information or actor) is what it claims to be) and non-repudiation (‘the ability to prove the occurrence of a claimed event or action and its originating entities’).

The degree to which all of the latter four properties ought to be regarded as core security dimensions is debatable; some are arguably concerned primarily with the dependability and robustness of IS or with holding actors to account rather than keeping assets secure. The same applies to the property of auditability (the ability to trace all actions concerning a given asset), which some influential explications of security criteria emphasise. Moreover, at least one of the properties—resilience—may be in tension with security in certain contexts. The development of the original internet exemplifies this tension well. As noted in Section 4.2, that network was designed primarily with a view to ensuring its overall robustness and resilience, at the expense of ensuring the security of the individual network nodes. It is not far-fetched to envisage other situations where there is a need, say, to compromise or sacrifice the security of a device to maintain the resilience of a larger IS to which that device is connected. This suggests that resilience is not necessarily a faithfully subordinate constituent of security, especially when applied as a network goal. Nonetheless, resilience, dependability, robustness and security tend to be closely interlinked both in theory and practice, while accountability mechanisms play an important part in ensuring security ‘on the ground’. Thus, it clearly makes sense to treat them as elements in the operationalisation of SbD.
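By way of illustration only, the following minimal sketch shows how the security properties canvassed above might be captured as a working vocabulary against which individual design requirements are tagged. The property set and the one-line glosses simply paraphrase the discussion in this section; the requirement and its identifier are hypothetical, and nothing here carries statutory weight.

```python
from enum import Enum


class SecurityProperty(Enum):
    # The classic CIA triad
    CONFIDENTIALITY = "assets are protected from unauthorised disclosure"
    INTEGRITY = "assets are protected from unauthorised modification"
    AVAILABILITY = "assets are accessible and usable on demand by authorised actors"
    # Closely related properties frequently added to the mix
    RESILIENCE = "the IS delivers intended outcomes despite security breaches"
    RELIABILITY = "the IS behaves and produces results consistently and as intended"
    AUTHENTICITY = "an entity (data, information or actor) is what it claims to be"
    NON_REPUDIATION = "the occurrence of a claimed event or action and its originators can be proven"
    AUDITABILITY = "all actions concerning a given asset can be traced"


# Hypothetical usage: tagging an individual design requirement with the properties it serves.
requirement = {
    "id": "REQ-012",
    "text": "All patient records shall be encrypted at rest.",
    "properties": [SecurityProperty.CONFIDENTIALITY, SecurityProperty.INTEGRITY],
}
print([p.name for p in requirement["properties"]])
```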

5.3 The Relationship between Security and Safety

Another issue is whether safety can or ought to be readily incorporated into the SbD mantra. The issue is complicated by the fact that many languages other than English fail to distinguish between security and safety. German, for example, uses ‘Sicherheit’ for both concepts, Spanish uses ‘seguridad’, Norwegian ‘sikkerhet’ and Italian ‘sicurezza’. In English, safety commonly refers to protection from threats to personal health or physical integrity arising as a result of natural disasters or benign human error, whereas security is often reserved for protection from malicious human conduct. However, these definitional distinctions vary from one discipline, industry or policy domain to another and are frequently blurred. In some domains, the distinction between safety and security may be based primarily on whether the threat is intentional; in others, it may depend primarily on whether the threat is extraneous to a particular system or on the degree of potential harm involved. Regardless, a well-established line of division persists amongst engineers in the computer industry, between safety experts ‘who see their role as preventing losses due to unintentional actions by benevolent actors’ and security experts ‘who see their role as preventing losses due to intentional actions by malevolent actors’. That said, the methodologies employed by each category of expert share much common ground.

The relationship between security and safety in the EU legislative context has mainly arisen in discussion of the ambit of product safety laws, such as the EU Toy Safety Directive 2009. These are traditionally regarded as addressing threats to personal health and physical integrity that are accidental or not maliciously induced. Consumer protection advocates view such legislation in its current form as failing to tackle cybersecurity-related vulnerabilities directly. Under this view, an internet-connected or ‘smart’ toy may be deemed safe for the purposes of the Toy Safety Directive even though it suffers from a security flaw with respect to its internet connectivity. To remedy this shortcoming, scholars, consumer groups and other policy entrepreneurs have argued in favour of greater legislative recognition of cybersecurity as a prerequisite for safety, particularly in an IoT context. This ‘security-for-safety’ approach to regulation makes sense at a time when an increasing range of physical devices falling within the purview of product safety legislation—from heart pacemakers to cars and refrigerators—is becoming ‘connected’ as cyberphysical systems.

Yet, the propriety of a ‘security-for-safety’ approach to regulation does not necessarily resolve the issue of whether safety ought to be a necessary element of the SbD mantra. That issue concerns, rather, the propriety of a ‘safety-for-security’ approach. Nonetheless, the latter approach has much to commend it as a matter of lex ferenda. In principle, there is no reason to exclude concern for safety from the embrace of SbD. Including it also makes practical sense given the growing complexity of information systems and the vulnerabilities they face. As Young and Leveson have observed:

Today’s increasingly complex, software-intensive systems … are exhibiting new causes of losses, such as accidents caused by unsafe interactions among components (none of which may have failed), system requirements and design errors, and indirect interactions and systemic factors leading to unidentified common-cause failures of barriers and protection devices. Linear causality models and the tools built upon them, like fault trees, simply lack the power to include these new causes of losses.

These developments require the adoption of a relatively holistic, systemic perspective. Omitting safety from security considerations (and vice versa) may result in unfortunate blind spots that undercut the ability to identify and plan for a large, disparate range of vulnerabilities.

As highlighted in Section 2.1 of this article, ENISA certainly pitches SbD as embracing safety, yet so far, there has been little recognition of an explicit safety component in legislative elaborations of SbD-related norms. The bulk of EU legislative instruments that flag SbD or SbD-related ideals use the CIA triad to describe the chief properties of ‘security’, albeit with some additions, but without express mention of ‘safety’. Accidental incidents sometimes figure under the rubric of security, and some legal instruments fail to differentiate on their face between protection from intentional or malicious threats and other types of threats. Therefore, these instruments may already implicitly cater to safety considerations. Further, EU legislators show increasing awareness of the need for more holistic approaches to security regulation generally. This is exemplified by the Council’s promotion of an ‘all-hazards’ approach to security in the context of the NIS2D proposal, and by the Commission’s proposed extension of the essential health and safety requirements for machinery products to embrace cybersecurity risks stemming from malicious third-party actions.

5.4 Legislative Inconsistency in Framing Security

The EU’s legislative elaboration of security properties is far from consistent. For instance, the definition of ‘security of network and information systems’ in Article 4(2) NISD adds ‘authenticity’ to the CIA triad, while Article 32(1)(b) GDPR adds ‘resilience’ to its security mix but omits ‘authenticity’. Neither the NISD nor the NIS2D proposal includes ‘resilience’ in its elaboration of security, although both frame security in terms of an ability ‘to resist’. However, ‘resistance’ is not necessarily the same as ‘resilience’: the former may connote the ability to defend against attack or disruption, whereas the latter connotes the ability to maintain or resume operations during or after an attack or disruption. Nonetheless, ‘resilience’ can be defined so broadly as to encompass both resistance and security, which is the case with the Commission’s proposal for a new directive on the resilience of critical entities. This proposal, along with the NIS2D proposal, also highlights inconsistency in how the relationship between resilience and security is pitched. While the CED proposal seemingly treats security as a property of resilience, the GDPR embraces the converse line. Parts of the NIS2D proposal pitch resilience as a means of achieving greater cybersecurity, while other parts describe the proposal as ultimately concerned with enhancing resilience.

Even embracement of the CIA triad within a single legislative instrument is sometimes incoherent. This is exemplified in the GDPR, which at one point uses ‘security’ as a supplement to ‘confidentiality’, thereby misleadingly suggesting that the latter is not a component of the former (or vice versa). Further, Article 5(1)(f) GDPR—which, as noted in Section 3.3, effectively flags SbD as a core principle for processing personal data—is problematic in terms of both its short-hand nomenclature and substantive provisions. Addressing the latter first, the list of security properties laid out in Article 5(1)(f) ought to have at least included a more explicit reference to the protection of data confidentiality. Furthermore, the inclusion of ‘protection against … unlawful processing’ in that list is out of place for a principle concerned with data security and fits much more squarely within the principle of ‘lawfulness, fairness and transparency’ set out in Article 5(1)(a). The short-hand nomenclature fails to flag ‘availability’, despite the list of referents in Article 5(1)(f) including protection against ‘accidental loss, destruction or damage’ of personal data.

While some of the terminological variations may be justifiable in light of the differing policy objectives of the instruments concerned—particularly in respect of ‘resilience’—finding a rational ground for all of the variations is challenging, to put it mildly. Overall, one is left with an impression of muddle that bespeaks a continuation of the aforementioned conceptual problems that Arnbak observed with EU regulatory policy on communications security prior to 2016. It also muddies the message behind SbD ideals.

5.5 The Meaning of ‘by Design’

Somewhat akin to ‘security’, the concept of ‘design’ is afflicted by slippery semantics. Reflecting on the history of its usage in general, Bruno Latour has observed that ‘design’ has both an expansive and expanding meaning. The capacity for ‘design’ to be defined broadly is exemplified by parts of the PbD discourse. For instance, in his engaging book, Privacy’s Blueprint, Woody Hartzog first defines ‘design’ as ‘processes that create consumer technologies and the results of their creative processes instantiated in hardware and software’, then as ‘how a system is architected, how it functions, how it communicates, and how that architecture, function, and communication affects people’ and further, as ‘the creation of tools to understand and act on current conditions’. Here, Hartzog uses the term to refer to processes, methods and the results of such processes and methods.

Many other contributions to ‘by design’ discourses have tended to focus more on the goals or objects of design than the meaning of ‘design’ as such, leaving the term relatively nebulous. This is the case, for example, with PbD and DPbDD discourses. Somewhat surprisingly, specialist scholarship on IS design has also struggled to reach agreement on what such design entails: ‘no single, all-encompassing definition of either IS or design in IS can be established’. This is despite a ‘rapid growth in interest in the notion of design—and hence in the building of a design science—in IS’.

There is, then, a risk of ‘design’—and, concomitantly, ‘by design’—suffering a similar fate to the term ‘governance’, which is criticised for having become ‘a rather fuzzy term that can be applied to almost everything and therefore describes and explains nothing’. This risk notwithstanding, it is possible to build out a reasonably cogent conceptualisation of ‘design’ such that the term is lifted well above being a semantic nullity and is able to facilitate operationalisation of the SbD mantra (together with similar mantras such as PbD and DPbDD).

To begin with, there is little doubt that ‘design’ generally connotes, at the very least, intentional, directed activity—that is, actions consciously set in motion to meet some articulated or semi-articulated goal. This is particularly so when the term is used in a ‘by design’ mantra. Thus, in the context of SbD, we are talking about intentional security, not incidental or accidental security. SbD may accordingly be contrasted with ‘security by chance’, ‘security by accident’ and ‘security by disaster’.

There is also little doubt that design is generally more than mere intention. This is highlighted by the aforementioned work of Hartzog, along with many other design-oriented scholars. In this respect, Ralph and Wand’s oft-cited explication of ‘design’ is particularly germane. They first provide a definition of the term in its noun form. According to this definition, design is:

a specification of an object, manifested by some agent, intended to accomplish goals, in a particular environment, using a set of primitive components, satisfying a set of requirements, subject to some constraints.

Second, as a transitive verb, ‘design’ is ‘to create a design, in an environment (where the designer operates)’. This stripping of ‘design’ to its very basic elements is conceptually useful and makes clear that it goes further than simply intention. It also involves engineering in the generic sense of working to bring something about, and conceptualisation. The engineering and conceptualisation dimensions are especially evident in the phrase ‘by design’.

At the same time, there is little doubt that design is ordinarily not the same as manufacturing, building or constructing, even though these activities are both shaped by, and a manifestation of, design. Yet, delineating precisely where design begins and ends in relation to such activities (or, indeed, other processes) may be challenging. The same goes when attempting to delineate the way(s) in which design happens. Ralph and Wand are fully cognizant of these challenges.

Because design is an activity, rather than a phase of some process, it may not have a discernable [sic] end point. Rather, it begins when the design agent begins specifying the properties of the object, and stops when the agent stops. Design may begin again if an agent (perhaps a user) changes structural properties of the specification or design object at a later time.

Our definition does not specify the process by which design occurs. Thus, how one interprets this scope of activities in the design process depends on the situation. If a designer encounters a problem and immediately begins forming ideas about a design object to solve the problem, design has begun with problem identification. If requirements are gathered in reaction to the design activity, design includes requirements gathering. In contrast, if a designer is given a full set of requirements upfront, or gathers requirements before conceptualizing a design object, requirements gathering is not part of design.

They accordingly go on to note that ‘design practice may not map cleanly or reliably into [sic] the phases of a particular process’, and that this can create problems for software development because computer scientists differ in their views as to what stages of software engineering are ‘design’ stages. Some computer scientists pin software ‘design’ to a discrete and narrowly defined step. For instance, in the traditional ‘waterfall’ model of software development, design typically denotes a process subsequent to requirements specification and prior to programming. Others, however, adopt a radically more expansive view of software design. Freeman and Hart, for example, state:

Design encompasses all the activities involved in conceptualizing, framing, implementing, commissioning, and ultimately modifying complex systems—not just the activity following requirements specification and before programming, as it might be translated from a stylized software engineering process.

Design from this kind of perspective is a lengthy, far-ranging, iterative process.

Defining design becomes even more multiplex when one considers who is involved in design activity. Traditionally, a distinction is drawn between designers of technology and users of technology—a view typical for actor–network theory. In reality, users are inevitably involved in design activity, even if that involvement is not formally recognised in a particular design process. As Waldman observes, ‘design is not complete until users have defined the uses and social valence of the technology in their hands’. Further, Hartzog notes that ‘[g]ood design cannot happen without the participation and respect of all stakeholders, including engineers, artists, executives, users, and lawyers’. These insights constitute a key point of departure for scholarship and practice on ‘participatory design’, which is aimed at encouraging and formalising user involvement in design processes.

In a similar vein—and moving closer to how design processes may operate with regard to cybersecurity—Brass and Sowell promote a vision of ‘adaptive governance’ for managing IoT security risks. This involves users of IoT products providing iterative feedback to IoT producers, regulatory authorities and other relevant stakeholders based on their expectations and experiences with these products. More generally, Brass and Sowell state:

Changes in the framing and scope of new regulatory interventions need to increasingly recognize that the expertise necessary to characterize emerging sociotechnical risks is rooted not only in those who design or regulate these technologies, but in the day-to-day operational expertise developed among those who deploy, operate, and manage them on the ground.

It is also important to note that standards bodies and other experts in security engineering stress the importance of cybersecurity-related design processes being concerned not only with software programming and other technical measures but also with a large variety of safeguards that ultimately reflect a particular organisational (and not just technical) framework. In this regard, the seminal report of the US Defense Science Board Task Force on Computer Security, referenced near the start of this article, presciently noted over fifty years ago that ‘software safeguards alone are not sufficient’ to deliver ‘comprehensive security’; rather, a ‘combination of hardware, software, communications, physical, personnel and administrative-procedural safeguards is required’. Similarly, the ISO and IEC emphasise that ‘the information security that can be achieved through technical means is limited, and can be ineffective without being supported by appropriate management and procedures within the context of an ISMS [Information Security Management System]’.

5.6 Design Semantics in Law

How do the aforementioned facets of design semantics play out in the legal dimension, particularly with respect to the legislative instruments that incorporate the SbD mantra? In the EU legal context, an important preliminary point is that care must be taken not to adopt uncritically an interpretation of the term ‘design’ that is rigidly locked to its connotations in the English language. Looking across the various language versions of EU legislative instruments that expressly flag SbD, a variety of terms can be found serving as the equivalent of ‘design’, and their meanings do not always map closely onto the English term. For example, the term used to denote SbD in the Danish version of recital 12 CA connotes embedment or integration (‘indbygget sikkerhed’), the French version connotes conception (‘sécurité dès le stade de la conception’), while the German version connotes both integration and conception (‘konzeptionsintegrierte Sicherheit’). Furthermore, the Danish and German versions seem to highlight an overall result, whereas the French version seems to highlight a stage or step occurring at the start of a process. Nevertheless, all of these variations point more or less to a process of deliberate, up-front planning. The legislative context in which they appear provides enough common ground across the language versions to neutralise the sorts of linguistic discrepancies just highlighted. The legally operational meaning of the terms ‘design’ and ‘by design’ is provided not so much by the terms in themselves as by the surrounding legislative text. The latter provides, in effect, the parameters outlined above by Ralph and Wand—that is, the specification(s) of objects, agents, components, requirements and constraints for the processes concerned. This is especially the case with legal instruments that do not expressly reference SbD or otherwise utilise the term ‘design’, but nevertheless embody the thrust of the SbD mantra.

A second important preliminary point is that there is a paucity of case law on the meaning and reach of legislative provisions that expressly or implicitly manifest SbD ideals. While technical standards bodies and regulatory authorities have issued a considerable amount of guidance and administrative orders, these do not have the same legally definitive status as judicial decisions. However, a case pending before the CJEU is likely to cast authoritative light on how Article 32 GDPR operationalises SbD ideals. This case may also cast light on the reach of similar provisions in other EU legislation. In the meantime, the parsing of the legal dimensions of the ‘by design’ element of the SbD mantra should proceed with caution.

The analysis in Section 5.5 indicates that SbD ought not to be neatly boxed in, conceptually or chronologically. Schneier has famously remarked that ‘security is not a product, but a process’. Similarly, SbD ought to be regarded as a recurrent, evolutive and relatively open-ended process rather than a one-off step. EU legislation provides fairly clear manifestation of this view. For example, the Cybersecurity Act states that ‘[s]ecurity should be ensured throughout the lifetime of the ICT product, ICT service or ICT process by design and development processes that constantly evolve to reduce the risk of harm from malicious exploitation’ (recital 12; emphasis added). Similarly, the GDPR hints at the ongoing nature of SbD when it requires—as one of the ‘technical and organisational measures’ for security under Article 32—a ‘process for regularly testing, assessing and evaluating the effectiveness’ of these measures (Article 32(1)(d)). Further, the references to ‘state of the art’ in the incipits of Articles 32(1) and 25(1) GDPR necessitate regular revisiting of security practices in light of changing perceptions as to which technical and organisational standards provide optimal security. The same can be said of, inter alia, Articles 14(1) and 16(1) NISD, Article 40(1) ECC and Annex I of the Medical Devices Regulation, which also use ‘state of the art’ as a touchstone. In contrast, the Californian legislation on ‘security of connected devices’ presented in Section 3.6 (hereinafter ‘Californian legislation’) is devoid of this dynamic approach, at least on its face.

Another point that can be safely drawn from the analysis in Section 5.5 is that SbD ought to encompass a range of design measures that include, yet go beyond, the strictly technical. In EU legislation, SbD is, on the whole, clearly pitched as embracing both ‘technical and organisational measures’. OECD regulatory policy takes the same approach. Exactly what measures fall within this bipartite category is not entirely clear. This is hardly surprising; it would be unrealistic and undesirable to attempt an exhaustive, let alone precise, specification of measures in legislation. This is especially true when legislation is given a broad field of coverage. Even if legislation were to be narrowly scoped, other challenges remain, such as ensuring that a precisely delineated list of measures is sufficiently ‘future-proofed’. Hence, the legislative specification of measures tends (and ought) to be relatively generic in form. That said, it is unfortunate that some such specifications are not formulated as measures in a strict sense—that is, as intentional actions—but rather as capabilities. This is the case, for instance, with two of the exemplifications of ‘measures’ listed in Article 32(1) GDPR: ‘the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services’ (Article 32(1)(b)) and ‘the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident’ (Article 32(1)(c)).

The phrase ‘technical and organisational measures’ in EU law undoubtedly signals an intention to cover a potentially large variety of measures and to be applied flexibly. In other words, ‘technical and organisational’ is to be construed liberally—a fortiori when construed in the context of furthering protection of a fundamental right, such as privacy or data protection. Generally speaking, technical measures directly concern, and are often executed in, the mechanics or workings of devices, objects, systems or processes. While frequently embodied in software, hardware and other artefacts, technical measures may go beyond them. They thereby encompass more than what is commonly understood as ‘technology’, although the latter term can also be defined expansively as any processes and methods for enhancing humans’ capacity to execute tasks, thus making ‘technical’ and ‘technology’ roughly commensurate with each other. Given the broad, generic sense of ‘technical’ just laid out, organisational measures will inevitably have a technical dimension in their operationalisation. This suggests that ‘technical’ and ‘organisational’ should not be seen as two entirely different and separate criteria. The chief distinction between them is that organisational measures directly concern the environment for the development and deployment of technical measures. More specifically, organisational measures primarily involve the assignment and management of roles, duties or tasks in connection with such development or deployment, typically under the aegis of a collective entity (corporation, state agency, etc). Security incident response management, staff training programmes on security procedures, and managerial decisions over who has responsibility for various security-related initiatives are obvious examples.

Any measure that appreciably helps promote security is likely to be regarded as capable of being embraced by the phrase ‘technical and organisational’, even if it fits awkwardly within the ordinary literal meaning of the phrase. This would be the case, for instance, with certain economic measures (eg budgetary allocations for security spending) or particular legal measures (eg drafting terms and conditions for the use of cloud computing services) inasmuch as these have a fairly direct bearing on the security of information systems and their contents. The requirements of DPbDD pursuant to Article 25 GDPR operate similarly, albeit across a wider range of data protection mechanisms. However, as measures’ relevance for ensuring security decreases, so too does the likelihood that they are required. Precisely which measures apply will depend largely on context-calibrated methodologies constituting the ‘state of the art’, particularly as elaborated in soft law standards drawn up by security experts. This may also be the case under legislative frameworks that are not directly linked to the protection of fundamental rights. In line with such standards, the measures should typically form, and be applied as, constituents of an overarching ISMS that governs the entirety of the IS lifecycle. And they may variously apply before, during and after a security breach.

Unfortunately, not all SbD-related rules take such an expansive approach. The design requirements of the Californian IoT security legislation appear to be pitched predominantly, if not exclusively, at technical measures. The same seems to apply to the provisions of the EU Cybersecurity Act on ‘security by default’ (see recital 13, set out in Section 3.1). However, this shortcoming in respect of the latter provisions is not as problematic as it is for the Californian legislation, because the ‘by default’ requirements are coupled to, and a by-product of, the more expansive ‘by design’ requirements elaborated in recital 12 of the Cybersecurity Act.

5.7 Ambition

An important legal issue concerns the degree of ambition that law demands of the design process. This issue can be broken down into two main questions: What level of security is required, and how much effort must be expended to achieve it? The answers will partly depend on the normative status of the referent object at stake: if that object is a fundamental right or closely linked to such a right, the requisite level and effort will be relatively high. Much the same will pertain if cybersecurity becomes a fundamental right in itself.

Nonetheless, it is generally assumed that complete security for information systems is well-nigh unattainable. Indeed, this is part of what makes achieving security a ‘wicked problem’. Legal references to security must therefore typically denote something less than complete security. With respect to Article 32 GDPR, for example, there is broad agreement that the provision constitutes an obligation of means, rather than result, in the sense that a security breach as such will not necessarily amount to an infringement of the legislation; rather, an infringement will only arise if the controller or processor has failed to take the measures required by Article 32. A similar standard most likely applies in respect of other equivalent requirements, such as Articles 14 and 16 NISD.

In the context of Article 25 GDPR, the European Data Protection Board duly opines that ‘[e]ffectiveness is at the heart of the concept of data protection by design’. Accordingly, the Board adds that the paramount objective of DPbDD is ‘the effective implementation of the principles and protection of the rights of data subjects into the appropriate measures of the processing’. The same may be said for Article 32 GDPR with respect to security measures. Article 32 is not about symbolic measures; rather, it is part of a concerted effort to ensure that ‘law in books’ becomes ‘law in practice’. In other words, Article 32 GDPR should be understood as going well beyond a soft paternalism that simply encourages thought to be given to IS security without requiring a practicably significant enhancement of security. This point is buttressed by the jurisprudence of the ECtHR in I v Finland and of the CJEU in Digital Rights Ireland presented in Section 3.4.

At the same time, the requisite security effort and levels are tempered by contextual factors. The incipit of Article 32(1) GDPR is a central example:

Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, …

As Burton notes, this phrasing imports a criterion of proportionality, which requires active consideration of whether the security measures are reasonably likely to achieve their aims and whether any detrimental effects they impose on competing legitimate interests are justified. Hence, the security standard required by SbD as a legal norm must be, in effect, one that is the result of best reasonable effort.
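To make the structure of this assessment more concrete, the sketch below caricatures a proportionality-style check of a candidate measure against the factors listed in the incipit of Article 32(1). The scoring scales, threshold and weighting are entirely hypothetical and are offered only to illustrate how ‘best reasonable effort’ reasoning might be operationalised, not as anything prescribed by the GDPR or by guidance on it.

```python
def measure_is_appropriate(likelihood: int, severity: int,
                           implementation_cost: int, state_of_the_art: bool) -> bool:
    """Toy proportionality check for a single candidate security measure.

    likelihood, severity and implementation_cost are hypothetical scores on a 1-5 scale;
    state_of_the_art flags whether the measure reflects current good practice.
    Returns True if, on this purely illustrative weighting, the measure should be adopted.
    """
    risk = likelihood * severity  # crude proxy for 'risk of varying likelihood and severity'
    if not state_of_the_art:
        return False              # an outdated measure is unlikely to count as 'appropriate'
    # Illustrative threshold: the greater the risk, the more implementation cost is reasonably demanded.
    return risk >= implementation_cost * 2


# Hypothetical example: high risk to data subjects, moderate cost, current technique.
print(measure_is_appropriate(likelihood=4, severity=5, implementation_cost=3, state_of_the_art=True))
```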

This is not obvious from the face of all the legislative instruments canvassed herein. For example, the above-cited recitals in the preamble to the EU Cybersecurity Act are, prima facie, relatively stringent and seem to operate with a ‘best effort’ standard. Recital 12 CA refers to protecting the security of ICT products, ICT services or ICT processes ‘to the highest possible degree, in such a way that the occurrence of cyberattacks is presumed and their impact is anticipated and minimised’. The same seems to apply in respect of the ‘security by default’ dimension which refers to the ‘most secure settings possible’ (recital 13 CA). However, the references in the CA recitals to what is ‘possible’ should be read in the light of what is reasonably possible. This follows from the provisions of the GDPR and other central EU laws that operationalise SbD and point to a broad range of factors for assessing appropriateness/reasonableness. The Cybersecurity Act must be construed in light of these provisions. The Californian legislation also seems to operate with a reasonableness criterion inasmuch as it expressly refers to a ‘reasonable’ security feature (§1798.91.04(a)).

The landmark judgment of the ECtHR in I v Finland presented in Section 3.4 deserves mention in this context. It will be recalled that the Court formulated a state’s obligation to provide security such as to ‘exclude any possibility of unauthorised access’ to hospital patient data. On its face, this is rather absolutist phrasing. However, the Court’s phrasing should be read in light of, first, the relevant Finnish law at the time, which required security measures to be both ‘appropriate’ and de facto implemented, and, second, the Court’s finding ‘that the records system in place in the hospital was clearly not in accordance with’ those legal requirements—a finding that the Court described as ‘decisive’ for its judgment. In light of these factors, the above-cited formulation regarding the exclusion of ‘any possibility of unauthorised access’ seems unnecessarily forceful and misleading, also given that the absent security measure that was centrally germane to the litigation would not have prevented unauthorised access but merely discouraged it. Thus, the judgment should not be read as laying down cybersecurity requirements more stringent than those aimed at ensuring a level of security that is the result of best reasonable effort.

5.8 Sanctions

It is not the intention of this article to delve deeply into the issue of sanctions but some brief remarks are pertinent. Notwithstanding a paucity of comprehensive mapping of the actual behavioural effects of legal sanctions for security breaches, it is reasonable to assume that the bite of SbD is partially dependent on the sanctions that may be incurred in the event of non-compliance with its legal requirements. A hypothetical yet realistic example from the work of Michels and Walden underscores this assumption: ‘a well-intentioned but underfunded IT department could use the risk of sanctions to convince a disinterested, cost-focussed board to give it the budget it needs for additional security measures’.

Sanctions for breach of SbD requirements may flow from a variety of legal frameworks. These extend beyond public law to private law, which may provide remedies under tort or contract. Despite this variety, legal sanctions have often constituted a weak point in cybersecurity regimes, as indicated in Section 4.2. The overarching problem in this regard has been the apparent failure of legal sanctions to incentivise organisations to invest up-front in strong security measures, in a situation where market incentives to do so are generally weak. Regulators’ reluctance to impose appreciable financial penalties for security breaches is part of this problem. Lack of consistency and transparency in the imposition of sanctions is another part. The NISD regime is a case in point. It simply requires penalties to be ‘effective, proportionate and dissuasive’ (Article 21 NISD)—a general principle of EU constitutional law—but otherwise gives Member States considerable leeway to implement sanctions as they see fit. At the same time, the Commission recently observed that ‘Member States have been very reluctant to apply penalties to entities failing to put in place security requirements or report incidents’.

The equivalent provisions in the GDPR are much more comprehensive than Article 21 NISD and arguably constitute the most ambitious sanctions regime in the cybersecurity domain, particularly with regard to the imposition of administrative fines. Such penalties are to be calibrated according to a long list of context-dependent criteria set out in Article 83(2) GDPR, which can be seen as constituents of the overarching requirement that sanctions should be ‘effective, proportionate and dissuasive’ (Article 83(1) GDPR). One criterion is ‘the degree of responsibility of the controller or processor taking into account technical and organisational measures implemented by them pursuant to Articles 25 and 32’ (Article 83(2)(d) GDPR). This helps to operationalise, in effect, SbD (and DPbDD) ideals, since appreciable efforts to implement security measures may be rewarded by reductions of the amounts payable as fines.

An intriguing, albeit confusing, feature of the GDPR sanctions regime is that it operates with a concurrently applicable two-tiered system for the maximum amount payable in the case of breaches of the regulation’s security requirements. The GDPR states that a breach of Article 32 will attract a fine of up to 10 million EUR or, in the case of an undertaking, up to two percent of its total worldwide annual turnover, whichever is higher (Article 83(4)). These amounts are doubled when there is a breach of ‘the basic principles for processing’ under Article 5 (Article 83(5) GDPR). As pointed out in previous sections, one such principle is ‘integrity and confidentiality’—ie ensuring ‘appropriate security of the personal data’ (Article 5(1)(f) GDPR). Some commentators misleadingly point only to Article 83(4) with respect to breaches of security requirements, seemingly forgetting the possibility of applying the higher tier. It has also been argued that Article 32 constitutes the lex specialis of Article 5(1)(f) and should therefore take precedence when setting the level of fines for security-related infringements; in other words, the tier for such infringements should be governed by Article 83(4), not Article 83(5). The argument has dubious merit for several reasons. First, while Article 32 overlaps with and embodies Article 5(1)(f), the two are not fully commensurate, as noted in Section 3.3. Second, the regulation clearly stipulates that breach of the Article 5 principles attracts the higher tier for levying fines. There is no indication that the principle of ‘integrity and confidentiality’ is intended to be exempt from this rule, particularly in light of the elevated normative status of security of personal data in the EU constitutional framework. Third, the potential application of both tiers to security infringements does not necessarily create practical problems: the higher level of fine may (and ought to) be applied in cases of egregious or especially negligent behaviour on the part of controllers, with the lower tier reserved for other cases.
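For concreteness, the two tiers can be expressed arithmetically. The short sketch below computes the statutory maxima for an undertaking under Article 83(4) (10 million EUR or two percent of total worldwide annual turnover, whichever is higher) and Article 83(5) (double those figures); the turnover figure in the example is invented, and the function merely illustrates the ceilings, not how a DPA actually sets a fine.

```python
def gdpr_max_fine(annual_turnover_eur: float, higher_tier: bool) -> float:
    """Maximum administrative fine for an undertaking under the GDPR's two tiers.

    Lower tier (Article 83(4), eg breach of Article 32): 10 million EUR or 2% of total
    worldwide annual turnover, whichever is higher. Higher tier (Article 83(5), eg breach
    of the Article 5 principles, including 'integrity and confidentiality'): 20 million EUR or 4%.
    """
    if higher_tier:
        return max(20_000_000, 0.04 * annual_turnover_eur)
    return max(10_000_000, 0.02 * annual_turnover_eur)


# Hypothetical undertaking with a total worldwide annual turnover of 3 billion EUR.
turnover = 3_000_000_000
print(gdpr_max_fine(turnover, higher_tier=False))  # 60000000.0 (2% exceeds the 10 million EUR floor)
print(gdpr_max_fine(turnover, higher_tier=True))   # 120000000.0 (4% exceeds the 20 million EUR floor)
```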

Unfortunately, it is extremely difficult to discern a consistent or clear approach from DPAs to how this two-tier system should be operationalised. A large number of fines have been issued for infringements of the GDPR’s security requirements, and the fine levels naturally vary a great deal. In many cases, DPAs have held that both Articles 5(1)(f) and 32 have been breached—which makes legal sense—but the weighting of each type of infringement in the calculation of the level of fine has often been left unelaborated. In one case, the DPA even incorrectly described the two-tier system by stating that Article 83(4) applies to infringements of Article 5. In some cases, breach of Article 32 has been found, without express consideration of possible breach of Article 5(1)(f), even though personal data have likely been insecurely processed. In some other cases, relatively paltry fines have been issued for Article 5(1)(f) infringements, without any mention of possible breaches of Article 32. Viewed as a whole, the pan-European operationalisation of the administrative fines scheme with regard to security-related breaches of the GDPR is far from satisfactory, not least from a ‘rule of law’ perspective. Regrettably, the same can be said of the scheme’s operationalisation with regard to GDPR breaches generally. A key problem is that the EDPB has not agreed on a detailed methodology for calculating fines, although this is on its current agenda. While the Board’s predecessor fleshed out the chief criteria for determining fine levels, the result does not go much further than the list provided in Article 83(2). This allows for divergent and often arcane decisional processes.

5.9 Users

The analysis in Section 5.5 points to users of information systems or other technology inevitably playing a role, directly or indirectly, in design processes. Accordingly, a conception of SbD that takes no account of this role underplays the multidimensionality of design. The analysis in Section 5.5 also points to user involvement in design processes as being desirable. Disappointingly, the legal instruments canvassed herein generally demonstrate scant recognition of the desirability of user involvement in SbD, or indeed of the very fact of such involvement. This is not to say that user involvement is necessarily absent from the legal framework. It may figure, for instance, in the reference to ‘nature, scope, context and purposes of [data] processing’ in Articles 24(1), 25(1) and 32(1) GDPR, along with the follow-up reference to the ‘risks of varying likelihood and severity for the rights and freedoms of natural persons’ in the same provisions. Yet, such flagging of user involvement is oblique, and the user seems to be cast primarily as a potential victim rather than as a valuable source of input into IS design.

6. False Promises and False Flags?

6.1 Existential Difficulties

The SbD mantra assumes that a meaningful level of security can be achieved by design. Does this assumption promise more than can actually be delivered? In a fundamental existential sense, it arguably does, at least if one considers insecurity to lie at the root of the human condition. From such a perspective, SbD could be characterised as overly optimistic; the struggle for security is really a perpetual Sisyphean task of moving a rock between various degrees of vulnerability. A similar criticism could be directed at PbD and DPbDD given the extreme erosion in recent years of the basic societal bedrock supporting privacy.

This sort of perspective is an important point of departure for many proponents of ‘cyber resilience’ as an overarching goal for IS development. For these people, insecurity is a fundamental, inescapable element of life; it is the rule, not the exception.

[T]he concept of resilience essentially treats adverse cyber events as a part of normal operations. The difference to the concept of security can therefore be crucial—it allows organizations to incorporate counter measures and contingency plans as a part of what could be considered as this new ‘normal’ condition.

In this view, cyber resilience is a more realistic approach to coping with threat than cybersecurity. A corollary of this view would be that ‘resilience by design’ should receive greater prominence than SbD in the legislative limelight. It is beyond the scope of this article to delve into the strengths and weaknesses of this view—that is a topic for another article. Drawing on that other work, it suffices to note here that cyber resilience is an elusive concept whose goals are not necessarily significantly easier to realise than those of cybersecurity. Björck and others, for example, pitch cyber resilience as ‘the ability to continuously deliver the intended outcome despite adverse cyber events’. They also pitch it as being concerned with designing information systems ‘to fail in a controlled way’. These are extremely ambitious goals. Their realism in many contexts is, at the very least, questionable.

To people who are unfamiliar with cybersecurity-related challenges, SbD in the abstract may well promise more security than can actually be delivered. Experts in security engineering and IS design are less likely to be deceived. Many of them appreciate the ‘wicked’ nature of the basic problem that the SbD mantra is intended to address. They also understand that the promised security is neither absolute nor permanent but rather a hard-won, contingent, transitory and imperfect result. Several of the items of legislation canvassed in the previous sections suggest that law makers in some jurisdictions share this understanding.

6.2 Governance Challenges

Existential difficulties aside, other problematic assumptions inhere in the notion of SbD and much of the broader ‘by Design’ (bD) discourse. That discourse—at least in regulatory contexts—tends to operate within a positivist paradigm focusing on ‘artefacts’ that are of a predominantly technical nature and that are separate to, or distinct from, yet controllable by, the humans that create them. A similar characterisation has been made of the field of design science. Much of the bD discourse also assumes that information systems can be built and governed along predictable paths. However, such systems often end up being used in ways beyond what designers are able to predict or model. Amara’s law is apposite here: ‘We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run’.

In reality, information systems are rarely complete, monolithic and static with clearly delineated borders; rather, they are frequently amorphous, incomplete and evolving organisational–technological structures with unclear lines of control. The internet is a prime example of this; hence, its governance has given rise to myriad challenges and considerable conflict. Moreover, perceptions of the condition, parameters or direction of a given IS may differ. How those formally tasked with the design of a system conceptualise its functionalities and aims can often be at odds with how users conceptualise them and with how the system ends up being used. This underlines the need to ensure user involvement in IS design.

Account must also be taken of software programming culture. In recent years, ‘agile’ or ‘evolutionary’ software development has become popular. This represents a movement from the relatively regimented ‘waterfall’ style of developing software—with neat, distinct, chronologically ordered and unidirectional stages of development accompanied by comprehensive documentation—to a more fluid development style characterised by a ‘no design up front attitude’. The ‘agile turn’ goes hand in hand with a ‘minimum viable product’ approach that focuses on delivering a product that is just good enough to work but nothing more. It also goes hand in hand with a turn to modularity, whereby one constructs ‘a complex product or process from smaller subsystems that can be designed independently yet function together as a whole’. All of these trends foster the atomisation of information systems and the design processes directed at their creation. This introduces further difficulties in realising ‘by design’ ideals that assume an ability to shape IS development along desired linear paths.

I am not suggesting that the turns to agility and modularity bring the SbD enterprise to its knees. That enterprise involves more than software programming, and security needs can be catered for to some degree within an evolutionary, modular framework. Moreover, care must be taken to ensure that software programmers understand that SbD mandates are not hemmed in or defined by the ‘waterfall’ model, which, as noted in Section 5.5, tends to tether design to a particular stage of software development. Nonetheless, the waterfall model has clear benefits for the SbD enterprise:

The strengths of the waterfall model are that it compels early clarification of system goals, architecture and interfaces; it makes the project manager’s task easier by providing definite milestones to aim at; it may increase cost transparency by enabling separate charges to be made for each step, and for any late specification changes; and it’s compatible with a wide range of tools. Where it can be made to work, it’s often the best approach.

Writing about the history of Microsoft’s security engineering methods, Anderson states: ‘It’s telling that the biggest firm pushing evolutionary development reverted to a waterfall approach for security’.

Another challenge, somewhat related to software programming culture but afflicting the engineering community more generally, concerns the standing of SbD ideals amongst computer engineers. There is considerable literature on the difficulty of getting engineers to embrace PbD ideals. It might be expected that ‘security’, with its relatively technocratic character, has greater resonance amongst engineers than ‘privacy’ has. However, evidence suggests that SbD faces a similar struggle to gain widespread traction in the engineering community. A recent survey by Spiekermann and others showed that while ‘the vast majority of engineers are aware that they should be pursuing privacy and security by design’, a large percentage of the respondents (40 percent) felt that it was not their responsibility to integrate privacy and security into their work and did not find pleasure in doing so. Other factors were also at play, particularly lack of time and the culture of the organisation in which an engineer works.

6.3 Legitimacy Challenges

Currently, the SbD enterprise is generally regarded as valuable and desirable. Apart from doubt over the ability to realise its ideals, the enterprise attracts little criticism. At the same time, SbD is tied to the concept of security, which has a range of referent objects that may be politically or economically charged. National security and national sovereignty are two prominent examples. Both are regularly invoked to justify the extraordinary exercise of power, often at the expense of civil liberties. Both are central elements of securitisation. While SbD as such has yet to be invoked in contexts in which national security or national sovereignty are the dominant referent objects, securitisation is definitely shaping the development of cybersecurity praxis and policy more generally. In the long term, it will be difficult to keep SbD separated from policy processes over which national security and national sovereignty hold sway. The ongoing discussions around security of 5G networks illustrate this difficulty well. A problem is that when national security becomes the predominant driver of the push for SbD, there is an increased risk of SbD being used to strengthen state interests in an authoritarian or semi-authoritarian way, thus calling its legitimacy seriously into question.

Government actors are not the only potential problem for the legitimacy of SbD. The actions of private corporations also need to be kept under scrutiny. Corporations could use SbD as a pretext for protecting their commercial interests at the expense of the legitimate interests of others. An indication of this potential is the clash over how much power producers should be given to determine how their products are used after being sold to consumers and other end-users. When wielding and justifying this power, producers can easily fly the flag of ‘security’. This has happened, for example, with Sony. In 2010, the company implemented an update to the operating system of its PlayStation 3 consoles which took away a particular feature (‘OtherOS’) allowing users to install the Linux operating system and thereby employ the consoles for a variety of purposes beyond gaming. Sony justified the update by pointing to security concerns and the need to combat digital piracy, but many console users saw the move as essentially motivated by Sony’s commercial proprietary needs and instigated litigation. Similarly, printer manufacturers have cited security fears as a reason for installing smart chips in their ink cartridges in order to ensure that only those cartridges can be accepted by their printers. The fears are allegedly based on the potential for hackers to exploit vulnerabilities in smart chips from other ink cartridge manufacturers. Again, deeper, commercially grounded proprietary concerns are probably at play, with the security fears being used to ‘whitewash’ (or, more accurately, ‘security-wash’) those concerns.

7. Conclusions

This article shows how SbD has gone from being simply a technical engineering standard to becoming entrenched as a hard law norm, particularly within the EU legal framework. For the processing of personal data, SbD serves as a fully fledged regulatory principle inhering not just in EU secondary legislation but also in the EU’s constitutional fabric. It does so hand-in-hand with the mantras of DPbDD and, to a lesser extent, PbD. As for the processing of non-personal data, SbD is not yet a proper hard law principle with a broad horizontal scope in the EU regulatory system. It is, however, close to reaching this status.

The normative strengthening of SbD engendered by these developments is welcome, particularly given the lack of strong pre-existing economic and legal incentives to implement robust security measures. It signals a serious effort by law makers to ensure that cybersecurity requirements laid down in law get substantial practical traction in the building and deployment of information systems and other tools. In other words, the fundamental legislative agenda here concerns the bridging of the traditional divide between ‘law in books’ and ‘law in practice’.

The legal push to promote SbD in the EU regulatory system has not been as coherent as it could have been. The EU has promoted SbD—often twinned with DPbDD and PbD—through a variety of legislative instruments with the aim of creating a set of interlocking, mutually reinforcing codes, some of relatively general application but most with narrower sectoral scope. The result has been less than optimal with regard to regulatory consistency, simplicity and clarity. Reform of particular sets of rules (such as those dealing specifically with IS security) has sometimes been out of kilter with other sets (such as those dealing with product liability and critical infrastructure). Definitions of security properties have differed from one rule set to another, often without a clear rationale for this variation. Fortunately, there are signs that law makers are increasingly aware of these shortcomings and increasingly ready to break down older regulatory ‘silos’. The most recent EU reform measures evidence an approach to cybersecurity that is holistic, flexible, adaptable, iterative and effective, with SbD as a key ingredient.

However, any assumption on the part of legislators or other regulatory authorities that rolling out the SbD mantra in law will be a fairly frictionless, orderly and sure-footed way of achieving cybersecurity is foolish. When parsing the various dimensions of the mantra, we strike complexity and fuzziness. When comparing the assumptions (implicit if not explicit) of SbD with the realities of software engineering practices and IS development, we strike a considerable degree of misalignment. When considering the plethora of soft law norms related to SbD, we struggle to find a generally agreed, unified methodology. When looking for judicial guidance on how to operationalise SbD as a legal mandate, we strike, to a large extent, ‘air’. And when looking for guidance from other regulatory authorities on the mantra’s operationalisation as a hard law norm, we find a considerable degree of inconsistency and opacity (eg with regard to the imposition of legal sanctions for non-compliance with the GDPR’s requirements). Those who are engaged in promoting SbD and developing methodologies for its implementation must confront these challenges rather than pretend they do not exist. The legal instruments that flag SbD ideals so far seem largely to ignore them. This is hardly conducive to ensuring that SbD gains real traction in IS engineering and similar processes.

Some of these challenges will be ameliorated through the development of case law, the continued refinement of SbD engineering methodologies and the development of new methodologies (eg for calculating administrative fines under the GDPR). Yet, it would be unreasonable to expect a very high degree of clarity to ever emerge. If the iterative and flexible character of sensible design is to be honoured, any SbD requirement—whether hard law or soft law—must be formulated in a relatively generic, function-focused manner, also with an eye to its ‘future proofing’. This will inevitably make compliance with SbD as a legal mandate relatively challenging to describe and measure.

At the same time, legal instruments mandating SbD must make its iterative character abundantly clear. They must accordingly highlight more than the initial design phase involved in building an information system; they must also highlight the multiple secondary phases (eg those concerned with IS-user interaction, IS maintenance and IS updates). Criticism has been made of the field of design science generally for not properly addressing secondary design phases. A similar criticism can be made of regulatory instruments. While some instruments (such as the EU Cybersecurity Act) seem cognisant of at least some of these phases and of the ongoing nature of design, others (such as the Californian connected devices legislation) do not. Even the most progressive legislation in this respect underplays the need for regular dialogue between designers and users. Going forward, greater legislative effort ought to be put into promoting ‘participatory design’ and Brass and Sowell’s vision of ‘adaptive governance’ within the SbD context.

Finally, the legal push for SbD must be mindful of the potential for SbD ideals to be manipulated in ways that could undermine their legitimacy in the long term. Regulators ought accordingly to be mindful not just of the potential uses of SbD but also of its potential abuses. In this regard, the following recommendation of the OECD is particularly pertinent:

Organisations should be aware that adoption of digital security measures which undermine human rights and fundamental values constitutes a risk to their image and credibility, and involves their legal responsibility. They should take advantage of the systematic nature of the digital risk management cycle to assess the impact of their security risk management decisions on human rights and fundamental values and adjust them as appropriate.

Acknowledgments

Work on this article was predominantly conducted under the aegis of the research project ‘Security in Internet Governance and Networks: Analysing the Law’ (SIGNAL), funded by the Research Council of Norway and UNINETT Norid AS (grant number 247947). Thanks go to these institutions for support. The research project ‘Governance of Health Data in Cyberspace’ (CyberHealth), funded by NordForsk (grant number 81105), has also provided a basis for some of the work. Thanks go additionally to the 3A Institute at the Australian National University (for providing a congenial base for much of the initial research for the article), to Florent Thouvenin, Rolf H Weber and the Center for Information Technology, Society and Law at the University of Zurich (for the opportunity to test some of the ideas advanced in the article), to Michael Birnhack and the Buchmann Faculty of Law at Tel Aviv University (for providing a stimulating environment for much of the article’s finalisation) and to my colleagues at the Norwegian Research Center for Computers and Law—especially Luca Tosoni and Arild Jansen—who commented on previous drafts. Live Sunniva Hjort provided excellent editorial assistance during the closing stages of work on the article. The usual disclaimer nonetheless applies.

  • 1
    See Article 25 of the European Union’s General Data Protection Regulation: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1. The provisions of Article 25 are described in Sections 3.1 and 3.3. Unless otherwise noted, all references to legal instruments are to the instruments’ amended state as of 31 December 2021. All cited URLs were also last visited on that date.
  • 2
    See eg European Union Agency for Network and Information Security, ‘Review of Cyber Hygiene Practices’ (December 2016) 14 (‘Cyber hygiene is a fundamental principle relating to information security and, as the analogy with personal hygiene shows, is the equivalent of establishing simple routine measures to minimise the risks from cyber threats’).
  • 3
    See Section 4.3.
  • 4
    Karen Yeung, ‘Design for the Value of Regulation’ in Jeroen van den Hoven, Pieter E Vermaas and Ibo van de Poel (eds), Handbook of Ethics, Values and Technological Design (Springer 2015) 447, 449. See also Karen Yeung, ‘Towards an Understanding of Regulation by Design’ in Roger Brownsword and Karen Yeung (eds), Regulating Technologies (Hart Publishing 2008) 79-94.
  • 5
    See Sections 2 and 3.
  • 6
    A rare example is Eldar Haber and Aurelia Tamò-Larrieux, ‘Privacy and Security by Design: Comparing the EU and Israeli Approaches to Embedding Privacy and Security’ (2020) 37 Computer Law & Security Review 105409 <https://doi.org/10.1016/j.clsr.2020.105409>. However, their article does not delve deeply into the research questions taken up in this article.
  • 7
    Two central examples are the doctoral treatises of Axel Arnbak and Aurelia Tamò-Larrieux. See Axel M Arnbak, Securing Private Communications: Protecting Private Communications Security in EU Law – Fundamental Rights, Functional Value Chains and Market Incentives (Wolters Kluwer 2016); Aurelia Tamò-Larrieux, Designing for Privacy and its Legal Framework: Data Protection by Design and Default for the Internet of Things (Springer 2018).
  • 8
    Horst WJ Rittel and Melvin M Webber, ‘Dilemmas in a General Theory of Planning’ (1973) 4 Policy Sciences 155 <https://doi.org/10.1007/BF01405730>.
  • 9
    ibid 160.
  • 10
    ibid 159.
  • 11
    Herbert A Simon, The Sciences of the Artificial (3rd ed, MIT Press 1996) 28, 119ff.
  • 12
    Willis H Ware, Security Controls for Computer Systems: Report of Defense Science Board Task Force on Computer Security (Rand Corporation 1970) iv.
  • 13
    ibid.
  • 14
    See seminal work by Jerome H Saltzer and Michael D Schroeder, ‘The Protection of Information in Computer Systems’ (1975) 63(9) Proceedings of the IEEE 1278, 1282ff <https://doi.org/10.1109/PROC.1975.9939>.
  • 15
    See Richard E Smith, ‘A Contemporary Look at Saltzer and Schroeder’s 1975 Design Principles’ (2012) 10(6) IEEE Security & Privacy 20 <https://doi.org/10.1109/MSP.2012.85>.
  • 16
    See Chad Dougherty and others, ‘Secure Design Patterns’, Technical Report CMU/SEI-2009-TR-010 (Carnegie Mellon University 2009) and references cited therein.
  • 17
    Joanna CS Santos, Katy Tarrit and Mehdi Mirakhorli, ‘A Catalog of Security Architecture Weaknesses’, 2017 IEEE International Conference on Software Architecture (ICSA) (IEEE 2017) 220-23 <https://doi.org/10.1109/ICSAW.2017.25>.
  • 18
    Available at <https://www.iot.org.au/wp/wp-content/uploads/2016/12/IoTAA-Security-Guideline-V1.2.pdf>. The IoTAA is the peak IoT industry body for Australia.
  • 19
    ENISA, ‘Baseline Security Recommendations for Internet of Things in the Context of Critical Information Infrastructures’ (November 2017).
  • 20
    ibid 47. ENISA has since elaborated SbD in other technical–industrial standards, such as a set of ‘good practices’ relating to ‘Industry 4.0’: see ENISA, ‘Good Practices for Security of Internet of Things in the context of Smart Manufacturing’ (November 2018) 37. Building partly on ENISA’s work, the European Telecommunications Standards Institute (ETSI) embraced SbD as a guiding principle for its recent standards dealing with connected consumer products: see ETSI, ‘Cyber Security for Consumer Internet of Things: Baseline Requirements’ (ETSI EN 303 645 V2.1.1 (2020-06)) 5 (‘Security by design is an important principle that is endorsed by the present document’).
  • 21
    European Commission, ‘Cybersecurity Strategy of the European Union: An Open, Safe and Secure Cyberspace’ (JOIN(2013) 1 final).
  • 22
    ibid 13.
  • 23
    European Commission, ‘Resilience, Deterrence and Defence: Building Strong Cybersecurity for the EU’ (JOIN(2017) 450 final) 5.
  • 24
    European Parliament resolution of 3 October 2017 on the fight against cybercrime (2017/2068(INI)) para 39. See also para 26 (‘regulation should play a greater role in managing cybersecurity risks through improved product and software standards on design and subsequent updates, as well as minimum standards on default usernames and passwords’).
  • 25
    European Commission (n 21) 5, 12.
  • 26
    Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union [2016] OJ L 194/1.
  • 27
    European Commission, ‘The EU’s Cybersecurity Strategy for the Digital Decade’ (JOIN(2020) 18 final) 5.
  • 28
    Council of the European Union, ‘Council Conclusions on the Cybersecurity of Connected Devices’ (13629/20, 2 December 2020) espec paras 3 and 13.
  • 29
    ANEC and BEUC, Cybersecurity for Connected Products: Position Paper (ANEC-DIGITAL-2018-G-001final – BEUC-X-2018-017 07/03/2018) 6.
  • 30
    UK Department for Digital, Culture, Media & Sport, Secure by Design: Improving the Cyber Security of Consumer Internet of Things (7 March 2018) 16.
  • 31
    ibid 5.
  • 32
    UK Department for Digital, Culture, Media & Sport, Policy Paper: Government Response to the Call for Views on Consumer Connected Product Cyber Security Legislation (21 April 2021).
  • 33
    Norwegian ICT Security Committee, IKT-sikkerhet i alle ledd – organisering og regulering av nasjonal IKT-sikkerhet [ICT security in all levels – organisation and regulation of national ICT security], Norges Offentlige Utredninger [Norway’s Official Reports], Report 14 (2018) 11, 87, 88, 96.
  • 34
    OECD, ‘Recommendation of the Council Concerning Guidelines for the Security of Information Systems’ (26 November 1992).
  • 35
    OECD, ‘Recommendation of the Council Concerning Guidelines for the Security of Information Systems and Networks: Towards a Culture of Security’ (25 July 2002).
  • 36
    This was not the only part of the Guidelines expressly addressing design; it was also addressed under the principle of ‘responsibility’, where the Guidelines stated: ‘Those who develop, design and supply products and services should address system and network security and distribute appropriate information including updates in a timely manner so that users are better able to understand the security functionality of products and services and their responsibilities related to security’. Further, the introduction to the Guidelines states that they ‘signal a clear break with a time when secure design and use of networks and systems were too often afterthoughts’.
  • 37
    See eg OECD, ‘Recommendation of the Council on Digital Security Risk Management for Economic and Social Prosperity’ (17 September 2015); OECD, ‘Recommendation of the Council on Digital Security of Critical Entities’ (11 December 2019).
  • 38
    For instance, the 2015 Recommendation (ibid) references ‘design’ under the umbrella of a general principle of ‘innovation’, which reads as follows: ‘Innovation should be considered as integral to reducing digital security risk to the acceptable level determined in the risk assessment and treatment. It should be fostered both in the design and operation of the economic and social activities relying on the digital environment as well as in the design and development of security measures.’
  • 39
    Federal Trade Commission, Start with Security: A Guide for Business (June 2015).
  • 40
    See eg Federal Trade Commission, Security Check: Reducing Risks to your Security Systems (June 2003).
  • 41
    The RIPD consists mainly of national data protection authorities in Central and South America.
  • 42
    RIPD, ‘General Recommendations for the Processing of Personal Data in Artificial Intelligence’ (21 June 2019) 16.
  • 43
    Committee on the Rights of the Child, General comment No 25 (2021) on children’s rights in relation to the digital environment (UN Doc CRC/C/GC/25; 2 March 2021) para 116. Other parts of the comment also stress the importance of design processes: see paras 39, 55, 62, 70, 77, 80, 91 and 110.
  • 44
    Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data [1995] OJ L 281/31 (repealed).
  • 45
    Recital 46 DPD stipulated the need to take ‘appropriate technical and organisational measures’ for protection of data subjects’ rights and freedoms, ‘both at the time of the design of the processing system and at the time of the processing itself, particularly in order to maintain security and thereby to prevent any unauthorised processing’. The recital went on to state: ‘these measures must ensure an appropriate level of security, taking into account the state of the art and the costs of their implementation in relation to the risks inherent in the processing and the nature of the data to be protected’. Article 17 DPD contained similar wording. Various sectoral rules have replicated the thrust of these provisions: see eg Article 4(1) of the Electronic Privacy Directive (EPD) (Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector [2002] OJ L201/37) which requires a ‘provider of a publicly available electronic communications service’ to ‘take appropriate technical and organisational measures to safeguard security of its services’.
  • 46
    The term ‘controller’ denotes the entity that determines, or co-determines, the purposes and means of data processing: see Article 4(7) GDPR. For concise explication, see Lee A Bygrave and Luca Tosoni, ‘Article 4(7): Controller’ in Christopher Kuner, Lee A Bygrave and Christopher Docksey (eds), The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press 2020) 145ff.
  • 47
    The term ‘processor’ denotes the entity that processes personal data on behalf of the controller: see Article 4(8) GDPR. For concise explication, see Lee A Bygrave and Luca Tosoni, ‘Article 4(8): Processor’ in Kuner and others (n 46) 157ff. A controller can also be a processor, but in the event that a controller uses another entity to process data, the latter entity shall carry out the processing under the controller’s instructions: see further Articles 28 and 29 GDPR.
  • 48
    The term ‘personal data’ denotes any information relating to an identified or identifiable natural person: see Article 4(1) GDPR. The definition is broad, especially as the EU Court of Justice has held that the relational criterion ‘is satisfied where the information, by reason of its content, purpose or effect, is linked to a particular person’: Case C-434/16, Peter Nowak v Data Protection Commissioner, judgment of 20 December 2017 (ECLI:EU:C:2017:994) para 35. For concise explication, see Lee A Bygrave and Luca Tosoni, ‘Article 4(1): Personal Data’ in Kuner and others (n 46) 103ff.
  • 49
    Similar requirements are laid down in Article 29 of the Law Enforcement Directive (LED) (see Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA [2016] OJ L 119/89), and in Article 33 of the EU Institutions Data Protection Regulation (EUIDPR) (see Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC [2018] OJ L 295/39).
  • 50
    See also Article 20 LED and Article 27 EUIDPR.
  • 51
    See Article 28(1) and recital 81 GDPR.
  • 52
    See also recital 78 GDPR (repeating the thrust of Article 25(1) and adding that ‘producers’ of products, services, and applications that involve processing of personal data ‘should be encouraged’ to take on board Article 25 ideals (even if they are neither controllers nor processors)).
  • 53
    See also Article 20(2) LED and Article 27(2) EUIDPR.
  • 54
    Lee A Bygrave, ‘Data Protection by Design and by Default: Deciphering the EU’s Legislative Requirements’ (2017) 4(2) Oslo Law Review 105, 116 <https://doi.org/10.18261/issn.2387-3299-2017-02-03>.
  • 55
    Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act) [2019] OJ L 151/15.
  • 56
    Defined according to three cumulative criteria in Article 5(2): ‘(a) an entity provides a service which is essential for the maintenance of critical societal and/or economic activities; (b) the provision of that service depends on network and information systems; and (c) an incident would have significant disruptive effects on the provision of that service’.
  • 57
    Defined in Article 4(6) as ‘any legal person that provides a digital service’, the latter being ‘online marketplace’, ‘online search engine’ or ‘cloud computing service’ (Annex III; Article 4(5)).
  • 58
    While the phrase ‘security by design’ is herein used without hyphens, recital 12 uses hyphens for ‘security-by-design’. This grammatical inconsistency has no material consequence, but is strange nonetheless. Other language versions differ on this point: eg the German version of recital 12 refers to ‘konzeptionsintegrierte Sicherheit — security by design’.
  • 59
    Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC [2017] OJ L 117/1. Numerous provisions in the regulation flag the importance of medical device design to ensure patient safety and other related interests, with an overarching requirement in Article 10(1) that, ‘[w]hen placing their devices on the market or putting them into service, manufacturers [of such devices] shall ensure that they have been designed and manufactured in accordance with the requirements of this Regulation’ (emphasis added). Information security is one such requirement, with paragraph 17(2) of Annex I to the regulation stipulating: ‘For devices that incorporate software or for software that are devices in themselves, the software shall be developed and manufactured in accordance with the state of the art taking into account the principles of development life cycle, risk management, including information security, verification and validation’. Similar provisions are incorporated in Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU [2017] OJ L 117/176. See Article 10(1) of the regulation and paragraph 16(2) of Annex I to the regulation.
  • 60
    Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services [2019] OJ L 136/1. See Articles 7 and 8 (setting out, respectively, ‘subjective’ and ‘objective’ conformity requirements with which suppliers of digital content or services to consumers must adhere, also in respect of the security of software included in the content or services). See also recitals 42, 47, 48 and 50.
  • 61
    Directive 2014/65/EU of the European Parliament and of the Council of 15 May 2014 on markets in financial instruments and amending Directive 2002/92/EC and Directive 2011/61/EU [2014] OJ L 173/349. See especially Article 66(3) (requiring Member States to ensure that ‘approved reporting mechanisms’ (ARMs) ‘have sound security mechanisms in place designed to guarantee the security and authentication of the means of transfer of information, minimise the risk of data corruption and unauthorised access and to prevent information leakage, maintaining the confidentiality of the data at all times’) (emphasis added). See also the Commission’s Proposal for a Regulation on digital operational resilience for the financial sector and amending Regulations (EC) No 1060/2009, (EU) No 648/2012, (EU) No 600/2014 and (EU) No 909/2014 (COM(2020)595 final), particularly Article 8(2) (‘Financial entities shall design, procure and implement ICT security strategies, policies, procedures, protocols and tools that aim at, in particular, ensuring the resilience, continuity and availability of ICT systems, and maintaining high standards of security, confidentiality and integrity of data, whether at rest, in use or in transit’), Article 8(4) (‘financial entities shall design the network connection infrastructure in a way that allows it to be instantaneously severed …’), and Article 14(b) (requiring key European financial regulatory authorities, ‘in consultation with’ ENISA, to draft technical standards that, inter alia, ‘prescribe how the ICT security policies, procedures and tools referred to in Article 8(2) shall incorporate security controls into systems from inception (security by design) …’).
  • 62
    Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC [2014] OJ L 257/73. See especially Article 19(1) (‘Qualified and non-qualified trust service providers shall take appropriate technical and organisational measures to manage the risks posed to the security of the trust services they provide. Having regard to the latest technological developments, those measures shall ensure that the level of security is commensurate to the degree of risk. In particular, measures shall be taken to prevent and minimise the impact of security incidents and inform stakeholders of the adverse effects of any such incidents’).
  • 63
    Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021) 206 final). See Article 15(1) (‘High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle’). Note also other design-focused stipulations in the proposal which require, in effect, logging by design, transparency by design and human oversight by design for high-risk AI systems: see Articles 12(1), 13(1) and 14(1) respectively.
  • 64
    Proposal for a Directive of the European Parliament and of the Council on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148 (COM(2020) 823 final), elaborated in Sections 3.2 and 3.5 of this article.
  • 65
    Proposal for a Regulation of the European Parliament and of the Council on machinery products (COM(2021) 202). See Annex III with new ‘Essential Health and Safety Requirement’ (EHSR) 1.1.9 (‘Protection against corruption: The machinery product shall be designed and constructed so that the connection to it of another device, via any feature of the connected device itself or via any remote device that communicates with the machinery product does not lead to a hazardous situation. …’) and amendment of EHSR 1.2.1 on safety and reliability of control systems (‘Control systems shall be designed and constructed in such a way that: (a) they can withstand, where appropriate to the circumstances and the risks, the intended operating stresses and intended and unintended external influences, including malicious attempts from third parties to create a hazardous situation’).
  • 66
    Proposal for a Regulation of the European Parliament and of the Council on harmonised rules on fair access to and use of data (Data Act) (COM(2022) 68 final). See especially Article 3(1) (‘Products [ie IoT products] shall be designed and manufactured, and related services shall be provided, in such a manner that data generated by their use are, by default, easily, securely and, where relevant and appropriate, directly accessible to the user’) and Article 30(1)(d) (‘a smart contract shall be protected through rigorous access control mechanisms at the governance and smart contract layers’).
  • 67
    Directive (EU) 2018/1972 of the European Parliament and of the Council of 11 December 2018 establishing the European Electronic Communications Code [2018] OJ L 321/36.
  • 68
    Regulation (EU) 2021/887 of the European Parliament and of the Council of 20 May 2021 establishing the European Cybersecurity Industrial, Technology and Research Competence Centre and the Network of National Coordination Centres [2021] OJ L 202/1.
  • 69
    This applies even to the well-established ‘general principles’ of EU law: see eg Chiara Amalfitano, General Principles of EU Law and the Protection of Fundamental Rights (Edward Elgar 2018) 17-18 (‘the “fluidity” of the notion of “general principle” is elusive and cannot be construed on the basis of clear parameters; neither can it be described as a set of provisions having specific features or satisfying certain requirements set forth in the Treaties. This sometimes implies certain difficulties in their exact identification. As correctly noted in legal literature, they therefore seem to be identifiable and definable more by notions relating to the philosophy of law than by notions pertaining to EU law’).
  • 70
    See eg Ronald M Dworkin, ‘The Model of Rules’ (1967) 35(1) University of Chicago Law Review 14, 23-29; Robert Alexy, ‘On the Structure of Legal Principles’ (2000) 13(3) Ratio Juris 294 <https://doi.org/10.1111/1467-9337.00157>; John Braithwaite, ‘Rules and Principles: A Theory of Legal Certainty’ (2002) 27 Australian Journal of Legal Philosophy 47, 50-52; Stephen R Perry, ‘Two Models of Legal Principles’ (1997) 82 Iowa Law Review 787.
  • 71
    Alexy (n 70). For his seminal work on this point, see Robert Alexy, Theorie der Grundrechte (Suhrkamp 1986) ch 3.
  • 72
    See eg András Jakab, ‘Concept and Function of Principles. A Critique of Robert Alexy’ in Martin Borowski (ed), On the Nature of Legal Principles (ARSP BEIHEFT No. 119) (Steiner 2009) 145-159.
  • 73
    Julian Rivers, ‘The Reception of Robert Alexy’s Work in Anglo-American Jurisprudence’ (2019) 10(2) Jurisprudence 133, 144 <https://doi.org/10.1080/20403313.2018.1519943>.
  • 74
    Armin von Bogdandy, ‘Founding Principles of EU Law: A Theoretical and Doctrinal Sketch’ (2010) 16(2) European Law Journal 95, 97-98 <https://doi.org/10.1111/j.1468-0386.2009.00500.x>.
  • 75
    See eg Case C-162/97, Criminal Proceedings against Nilsson, Hagelgren and Arrborn, judgment of 19 November 1998 (ECLI:EU:C:1998:554) para 54.
  • 76
    See eg European Parliament, Committee on the Internal Market and Consumer Protection, ‘Draft report with recommendations to the Commission on the Digital Services Act: Improving the functioning of the Single Market’ (2020/2018(INL); 15 April 2020) 12 (referring to the need to establish ‘the principles of transparency-by-design and transparency-by-default’ in relation to advertising and other commercial communications) and 13 (referring to the need to establish ‘the principle of safety and security by default’ in relation to artificial intelligence). For other examples, see Section 4.3.
  • 77
    See recital 54 NIS2D proposal (n 64).
  • 78
    Council of the European Union, Proposal for a Directive of the European Parliament and of the Council on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148 – General Approach (14337/21; 26 November 2021).
  • 79
    See also European Data Protection Board, ‘Guidelines 4/2019 on Article 25: Data Protection by Design and by Default’ (version 2.0; 20 October 2020) para 21.
  • 80
    See also IT Security Association Germany (TeleTrusT) and ENISA, ‘IT Security Act (Germany) and EU General Data Protection Regulation: Guideline “State of the Art” – Technical and Organisational Measures’ (version 1.9_2021-09 EN) 11 (‘The “state of the art” refers to the best performance of an IT security measure available on the market to achieve the legal IT security objective’); Maximilian von Grafenstein, ‘Co-Regulation and the Competitive Advantage in the GDPR: Data Protection Certification Mechanisms, Codes of Conduct and the “State of the Art” of Data Protection-by-Design’ in Gloria González Fuster, Rosamunde van Brakel and Paul de Hert (eds), Research Handbook on Privacy and Data Protection Law: Values, Norms and Global Politics (Edward Elgar 2022) 402, 427 (noting that the ‘state of the art’ requirement under Articles 25 and 32 GDPR ‘obliges data controllers, and in some cases processors, to constantly consider the most effective (i.e. highest) level of protection offered on the market’).
  • 81
    ISO 31000:2018 (Risk management – Guidelines) clause 4.
  • 82
    ISO/IEC 27001:2013 (Information technology – Security techniques – Information security management systems – Requirements) clause 0.1 (‘It is important that the information security management system is part of and integrated with the organization’s processes and overall management structure and that information security is considered in the design of processes, information systems, and controls’). See also ibid, Annex A, Table A.1 (sections A.14.1 and A.14.2). See also eg ISACA (formerly Information Systems Audit and Control Association) Germany Chapter, ‘Implementation Guideline ISO/IEC 27001:2013 – A practical guideline for implementing an ISMS in accordance with the international standard ISO/IEC 27001:2013’ (April 2017) 16 (‘It is decisive that non-functional requirements—and security requirements are non-functional in most cases—are considered from the very beginning and integrated into the planning of projects, products, and systems (so called “security-by-design”)’). The latter guideline’s characterisation of security requirements as ‘non-functional’ is now arguably dated, but the general point it makes in respect of timing remains valid.
  • 83
    This is not to suggest that risk management is a one-off step: it should be iterative and continually evolving as the threat landscape changes, as the ISO and IEC make clear: ISO/IEC (n 82) clause 10.2 (‘The organization shall continually improve the suitability, adequacy and effectiveness of the information security management system’). See also Section 5.6 in this article.
  • 84
    See also Dag Wiese Schartum, Personvernforordningen: En lærebok (Fagbokforlaget 2020) 265 (emphasising the proactive thrust of Article 32 measures).
  • 85
    EDPB (n 79) para 34. At the risk of spelling out the obvious, the EDPB is composed of the heads of the national data protection authorities (DPAs) of EU Member States, together with the European Data Protection Supervisor, and is established under Article 68 GDPR. Its primary remit is providing advice, guidance and recommendations in order to ensure the GDPR’s ‘consistent application’ (Article 70(1) GDPR).
  • 86
    ibid para 85.
  • 87
    See also eg Article 4(1)(f) LED and Article 4(1)(f) EUIDPR.
  • 88
    See also recitals 78 and 108 GDPR, and Article 4(1)(f) of both LED and EUIDPR.
  • 89
    See eg recital 48 DCD. Cf Article 12(3)(1) eIDAS Regulation (referring to the ‘principle of privacy by design’).
  • 90
    Appl no 20511/03.
  • 91
    ibid para 47. In a judgment handed down shortly afterwards, the ECtHR again emphasised the need to secure ‘practical and effective protection’ of a person’s right under Article 8(1) in respect of internet-related conduct, although the case did not deal directly with cybersecurity issues: see KU v Finland, appl no 2872/02 (2008) especially para 49.
  • 92
    See Arnbak (n 7) 81; Bygrave (n 54) 111.
  • 93
    S and Marper v United Kingdom, appl nos 30562/04 and 30566/04 (2008) para 99; PN v Germany, appl no 74440/17 (2020) para 62. I am indebted to Luca Tosoni for pointing me to these aspects of the two judgments.
  • 94
    Article 52(3) basically states that rights in the Charter shall have the same meaning and ambit as corresponding rights in the ECHR, though without preventing Union law from providing more extensive protection than under the ECHR.
  • 95
    Case C-400/10 PPU, McB v LE, judgment of 5 October 2010 (ECLI:EU:C:2010:582) para 53.
  • 96
    Joined Cases C-293/12 and C-594/12, Digital Rights Ireland Ltd v Minister for Communications, Marine and Natural Resources and Others and Kärntner Landesregierung and Others, judgment of 8 April 2014 (Grand Chamber) (ECLI:EU:C:2014:238) para 40. See also Opinion 1/15 in which the CJEU found that an agreement between the EU and Canada for sharing airline passenger records did not infringe the essence of the right to data protection in Article 8 CFREU because the agreement contained ‘rules intended to ensure, inter alia, the security, confidentiality and integrity of that data, and to protect it against unlawful access and processing’: Opinion 1/15 of 26 July 2017 (Grand Chamber) (ECLI:EU:C:2017:592) para 150.
  • 97
    Directive 2006/24/EC of the European Parliament and of the Council of 15 March 2006 on the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks and amending Directive 2002/58/EC [2006] OJ L 105/54 (repealed).
  • 98
    Digital Rights Ireland (n 96) para 66 (emphasis added).
  • 99
    ibid para 67.
  • 100
    Arnbak (n 7) 87.
  • 101
    Judgment of 27 February 2008 (First Senate), 1 BvR 370/07 (ECLI:DE:BVerfG:2008:rs20080227.1bvr037007) paras 166 et seq.
  • 102
    ibid para 204. This concept of integrity seems wider than the way it is typically framed in technical literature on ICT security, which tends to pitch ‘integrity’ in terms of preventing unauthorised alteration or modification of information systems and their contents: see Section 5.2 of this article. The Court’s concept of ‘information technology system’ seems intentionally diffuse and covers a broad range of data-processing devices that, if accessed, could reveal a significant amount of information about their users’ lives or personalities: see Wiebke Abel and Burkhard Schafer, ‘The German Constitutional Court on the Right in Confidentiality and Integrity of Information Technology Systems – a case report on BVerfG, NJW 2008, 822’ (2009) 6(1) SCRIPTed 106, 119ff.
  • 103
    Article 6 DPD did not contain such a principle in its list of core principles, instead setting down security requirements in a separate provision (Article 17), which was also placed in a different section (section VIII) to Article 6. This was surprising given that the Council of Europe’s Data Protection Convention of 1981 (Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (European Treaty Series (ETS) No 108)), which served as a central template for the DPD, places its provisions on security (see Article 7) in its Chapter II (titled ‘Basic principles for the protection of personal data’). Thus, security of personal data is undoubtedly treated as a core principle under the Convention. It is also a key constituent of the ‘Fair Information Practice’ Principles (FIPPs) that have been an influential point of departure for the development of data protection law generally, starting in the early 1970s. See eg Arnbak (n 7) 24; Lee A Bygrave, Data Privacy Law: An International Perspective (Oxford University Press 2014) chs 1, 5 and references cited therein.
  • 104
    Note, however, that Article 5(1)(f), like aforementioned paragraph 66 of the CJEU’s judgment in Digital Rights Ireland (n 96), refers to ‘integrity and confidentiality’ rather than ‘confidentiality and integrity’, which is the sequence of the fundamental right as formulated in the German judgment. The latter sequence is more in conformity with the ‘CIA’ nomenclature of security discourse (presented in Section 5.2 of this article) and thus makes sense in that light, whereas the ‘tweaking’ of this sequence in Digital Rights Ireland is curious, if not surprising. The fact that Article 5(1)(f) repeats that tweaking rather than the sequence formulated by the Bundesverfassungsgericht and security literature generally (minus the ‘A’ for availability, elaborated in Section 5.2 of this article), suggests that Digital Rights Ireland also helped shape the formulation of Article 5(1)(f). This suggestion is reinforced by the fact that the terms ‘integrity and confidentiality’ were added to the legislative text during the trilogue negotiations in November 2015—ie after the publication of the judgment in Digital Rights Ireland.
  • 105
    The European Commission cited the judgment in its 2010 Communication, ‘A comprehensive approach on personal data protection in the European Union’ (COM(2010) 609 final; 4 November 2010) 5 (footnote 14). While the Commission did not directly state that the judgment would necessitate adopting a new principle in the EU data protection regime, it indicated that ‘additional measures under Union law’ may be required in ‘cases where the confidentiality and integrity in [sic] information-technology systems must be ensured’, and referred to the judgment in respect of such cases. Further on the judgment’s influence on the GDPR, see Luca Tosoni, ‘The Fundamental Right to Cybersecurity: Inception, Implication and Limits’ (PhD dissertation, University of Oslo 2022; forthcoming) ch 7 (sections 7.5.1 and 7.6.3) and references cited therein.
  • 106
    However, Advocate General Cruz Villalón referenced the Bundesverfassungsgericht’s subsequent judgment on the constitutionality of the German government’s transposition of the DRD. See his Opinion delivered 12 December 2013 in Digital Rights Ireland para 72 (footnote 66).
  • 107
    See its judgment on the constitutionality of Germany’s transposition of the DRD: judgment of 2 March 2010 (First Senate), 1 BvR 256/08 (ECLI:DE:BVerfG:2010:rs20100302.1bvr025608) para 224 (‘The Basic Law does not lay down in detail what specific security measures are required. Ultimately, however, a standard must be guaranteed which, specifically taking into account the special features of the data pools created by precautionary storage of telecommunications traffic data, guarantees a particularly high degree of security. In this connection, it must be ensured that this standard—for example by recourse to legal concepts of non-constitutional law such as the state of the art …—is oriented to the state of development of the discussion between specialists and constantly absorbs new knowledge and insights. …’).
  • 108
    As recital 49 NISD makes abundantly clear. Further on these differences and their rationale, see eg Dimitra Markopoulou, Vagelis Papakonstantinou and Paul de Hert, ‘The New EU Cybersecurity Framework: The NIS Directive, ENISA’s Role and the General Data Protection Regulation’ (2019) 35(6) Computer Law & Security Review 105336, 5-6 <https://doi.org/10.1016/j.clsr.2019.06.007>.
  • 109
    Nonetheless, Member States may broaden the scope of application of these rules when transposing them at national level, and this opportunity has been exploited, albeit in a rather uncoordinated and inconsistent way. See further European Commission, Directorate-General for Communications Networks, Content and Technology, Gabor Endrodi, George Maridis, Stefan Schmitz and others, ‘Study to support the review of Directive (EU) 2016/1148 concerning measures for a high common level of security of network and information systems across the Union (NIS Directive), No. 2020-665: final study report’ (European Commission 2021) <https://data.europa.eu/doi/10.2759/184749> 11ff.
  • 110
    Arnbak (n 7) 43. Arnbak’s remark concerns recital 25 in the Commission’s proposal for the NISD (COM/2013/048 final – 2013/0027 (COD)). Recital 25 read: ‘Technical and organisational measures imposed to [sic] public administrations and market operators should not require that a particular commercial information and communications technology product be designed, developed or manufactured in a particular manner.’ The provisions of recital 25 ended up being reformulated slightly and placed in recital 51 NISD, but their basic thrust regarding design remained unchanged in that process.
  • 111
    In respect of digital service providers, this focus is reinforced by the requirements in Article 2 of Commission Implementing Regulation (EU) 2018/151 of 30 January 2018 laying down rules for application of Directive (EU) 2016/1148 of the European Parliament and of the Council as regards further specification of the elements to be taken into account by digital service providers for managing the risks posed to the security of network and information systems and of the parameters for determining whether an incident has a substantial impact [2018] OJ L26/48.
  • 112
    See n 64.
  • 113
    See n 78.
  • 114
    Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC [2014] OJ L153/62.
  • 115
    Commission Delegated Regulation (EU) 2022/30 of 29 October 2021 supplementing Directive 2014/53/EU of the European Parliament and of the Council with regard to the application of the essential requirements referred to in Article 3(3), points (d), (e) and (f), of that Directive [2022] OJ L7/6. Article 3(3)(d) RED concerns network protection, Article 3(3)(e) RED concerns privacy and protection of personal data, while Article 3(3)(f) RED concerns protection from fraud. The regulation applies, as a point of departure, to ‘any radio equipment that can communicate itself over the internet, whether it communicates directly or via any other equipment’ (Article 1(1)). By using delegated powers, the Commission has avoided the co-decision procedures that otherwise apply to EU legislative proposals, thereby speeding up the imposition of cybersecurity rules for IoT manufacturers. The latter are given a grace period of thirty months in which to adapt to the rules.
  • 116
    See n 63.
  • 117
    2021 State of the Union address by President von der Leyen, Strasbourg, 15 September 2021, <https://ec.europa.eu/commission/presscorner/detail/ov/SPEECH_21_4701>. The details of the Cyber Resilience Act proposal were not published at the time of finalising this article.
  • 118
    See n 66.
  • 119
    Proposal for a Directive of the European Parliament and of the Council on the resilience of critical entities (COM(2020) 829 final). The CED proposal primarily concerns resilience in respect of ‘all relevant natural and man-made risks, including accidents, natural disasters, public health emergencies, antagonistic threats, including terrorist offences’ (Article 4(1)). At the same time, the Commission recognises the close connections and interdependency between security of physical and digital infrastructures. Accordingly, the thrust of Articles 10 (risk assessment) and 11 (resilience measures) of the proposal is in line with the thrust of SbD ideals, even if these provisions lack explicit reference to a ‘design’ element.
  • 120
    See n 68.
  • 121
    An example being industrial control systems (ie systems that steer industrial operations, such as gas production or oil refinement), which lack a comprehensive, sui generis security framework under EU law: see Dimitra Markopoulou and Vagelis Papakonstantinou, ‘The Regulatory Framework for the Protection of Critical Infrastructures against Cyberthreats: Identifying Shortcomings and Addressing Future Challenges: The Case of the Health Sector in Particular’ (2021) 41 Computer Law & Security Review 105502, 7, 12 <https://doi.org/10.1016/j.clsr.2020.105502>. However, if adopted, the CED proposal (n 119) will go a significant way towards remedying this gap.
  • 122
    See also Bygrave (n 54) 118.
  • 123
    See also Pieter Wolters, ‘The Security of Personal Data under the GDPR: A Harmonized Duty or a Shared Responsibility?’ (2017) 7(3) International Data Privacy Law 165 <https://doi.org/10.1093/idpl/ipx008>.
  • 124
    Protection of Privacy Regulations (Data Security) 5777-2017, promulgated 5 April 2017 and in force 8 May 2018.
  • 125
    See further Haber and Tamò-Larrieux (n 6) 5-8.
  • 126
    California Civil Code Part 4, Division 3, Title 1.81.26 (‘Security of Connected Devices’) commencing at §1798.91.04(a).
  • 127
    Michael Veale and Ian Brown, ‘Cybersecurity’ (2020) 9(4) Internet Policy Review 1, 9 <https://doi.org/10.14763/2020.4.1533>.
  • 128
    The legislation does not apply to health-care providers (§1798.91.06(h)) or to devices subject to federal security requirements (§1798.91.06(d)). It does not give rise to a private right of action (§1798.91.06(e)), nor does it impose a duty of compliance on a ‘manufacturer of a connected device related to unaffiliated third-party software or applications that a user chooses to add to a connected device’ (§1798.91.06(a)). It also refrains from imposing a duty on intermediaries and other ‘gatekeepers’ to enforce compliance (§1798.91.06(b)).
  • 129
    Robert Lemos, ‘California’s IoT Security Law Causing Confusion’ InformationWeek (19 September 2019) <https://www.darkreading.com/iot/californias-iot-security-law-causing-confusion/d/d-id/1335863>.
  • 130
    Bruce Schneier, Click Here to Kill Everybody (WW Norton and Company 2018); Norwegian ICT Security Committee (n 33) 9-10.
  • 131
    A good example being the 2017 ‘Equifax hack’, which compromised sensitive information on over 148 million people, predominantly in North America, and which Equifax was overly slow to report. The revelations about the poor security measures Equifax allegedly had in place at the time of the attack (as documented in the class action initially brought before the US District Court for the Northern District of Georgia Atlanta Division (Civil Action File No 17-CV-3463-TWT)) almost beggar belief: see <http://securities.stanford.edu/filings-documents/1063/EI00_15/2019128_r01x_17CV03463.pdf>.
  • 132
    Ross Anderson and Tyler Moore, ‘The Economics of Information Security’ (2006) 314 Science 610-613; Hadi Asghari, Michel van Eeten and Johannes M Bauer, ‘Economics of Cybersecurity’ in Johannes M Bauer and Michael Latzer (eds), Handbook on the Economics of the Internet (Edward Elgar 2016) 262-87.
  • 133
    Again, a good example being the 2017 ‘Equifax hack’ (n 131). The hack was deeply problematic for many reasons, including its indirect costs (eg in terms of detriment to the level of trust in the US economy): see eg Tyler Moore, ‘On the Harms Arising from the Equifax Data Breach of 2017’ (2017) 19 International Journal of Critical Infrastructure Protection 47-48 <https://doi.org/10.1016/j.ijcip.2017.10.004>. Yet, while the scandal associated with the hack caused considerable reputational damage and short-term financial pain for Equifax—not least owing to an expensive class action settlement agreement with the US Federal Trade Commission in 2019—the company’s longer term business interests appear to have escaped indelible harm: see Irini Kanaris Miyashiro, ‘Case Study: Equifax Data Breach’ (30 April 2021) <https://sevenpillarsinstitute.org/case-study-equifax-data-breach/>.
  • 134
    Ross Anderson, Security Engineering (3rd edn, Wiley 2020) 294ff; UK Department for Digital, Culture, Media & Sport (n 30) 9; Norwegian ICT Security Committee (n 33) 57.
  • 135
    UK Department for Digital, Culture, Media & Sport (n 30) 16.
  • 136
    Asghari and others (n 132) 267. Cf Jennifer A Chandler, ‘Negligence Liability for Breaches of Data Security’ (2008) 23(2) Banking and Finance Law Review 223, 228 (noting evidence of a ‘growing desensitization to data security breaches’ in North America which may reduce the impetus for class actions in response to such breaches).
  • 137
    George A Akerlof, ‘The Market for “Lemons”: Quality Uncertainty and the Market Mechanism’ (1970) 84(3) Quarterly Journal of Economics 488.
  • 138
    Anderson and Moore (n 132) 611; Asghari and others (n 132) 265. Traces of this dynamic are evident in recent empirical surveys focused on consumer purchasing patterns for connected devices. See eg Meredydd Williams, Jason RC Nurse and Sadie Creese, ‘“Privacy is the Boring Bit”: User Perceptions and Behaviour in the Internet of Things’ (Proceedings of the 15th International Conference on Privacy, Security and Trust, 2017) <https://arxiv.org/abs/1807.05761> (finding in a recent survey of UK consumers that while many were aware of the privacy and security risks with consumer IoT devices, the overwhelming majority were prepared to purchase them because of their perceived functionality; only 9% of respondents considered the privacy and security risks to be a reason for not purchasing such a device); Ipsos MORI, Consumer Attitudes to IoT Security (December 2020) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/978685/Consumer_Attitudes_Towards_IoT_Security_-_Research_Report.pdf> (documenting a survey commissioned by the UK Department for Digital, Culture, Media and Sport in 2020, which found that only around twenty percent of surveyed UK consumers check if a smart device operates with a generic password and change it).
  • 139
    See Johan David Michels and Ian Walden, ‘Beyond “Complacency and Panic”: Will the NIS Directive Improve the Cybersecurity of Critical National Infrastructure?’ (2020) 45(1) European Law Review 25, 29-30 and references cited therein.
  • 140
    See eg Myriam Dunn Cavelty, ‘Breaking the Cyber-Security Dilemma: Aligning Security Needs and Removing Vulnerabilities’ (2014) 20(3) Science and Engineering Ethics 701, 710 <https://doi.org/10.1007/s11948-014-9551-y>.
  • 141
    Request for Comments (RFC) 1958, Architectural Principles of the Internet (Brian Carpenter (ed)) (June 1996) para 2.3, <http://www.ietf.org/rfc/rfc1958.txt>. For discussion of the principle’s significance, see eg Barbara van Schewick, Internet Architecture and Innovation (MIT Press 2010) 57ff.
  • 142
    Schneier (n 130) ch 6.
  • 143
    Sarah Spiekermann, Jana Korunovska and Marc Langheinrich, ‘Inside the Organization: Why Privacy and Security Engineering is a Challenge for Engineers’ (2019) 107(3) Proceedings of the IEEE 600, 612 <https://doi.org/10.1109/JPROC.2018.2866769>.
  • 144
    See the regulation’s provisions on controller responsibility (Article 24) and data protection impact assessment (Article 35). For elaboration of the role of the risk-based approach in the current EU data protection regime, see eg Raphael Gellert, The Risk-Based Approach to Data Protection (Oxford University Press 2020); Claudia Quelle, ‘The “Risk Revolution” in EU Data Protection Law: We Can’t Have Our Cake and Eat It, Too’ in Ronald Leenes, Rosamunde van Brakel, Serge Gutwirth and Paul De Hert (eds), Data Protection and Privacy: The Age of Intelligent Machines (Hart Publishing 2017) 33-62; Milda Macenaite, ‘The “Riskification” of European Data Protection Law through a Two-fold Shift’ (2017) 8 European Journal of Risk Regulation 506, 517-32 <https://doi.org/10.1017/err.2017.40>; Karen Yeung and Lee A Bygrave, ‘Demystifying the Modernised European Data Protection Regime: Cross-disciplinary Insights from Legal and Regulatory Governance Scholarship’ (2022) 16(1) Regulation & Governance 137, 144-47 <https://doi.org/10.1111/rego.12401>.
  • 145
    See eg Ann Cavoukian, ‘Privacy by Design: The 7 Foundational Principles’ (August 2009; revised January 2011) <https://www.ipc.on.ca/wp-content/uploads/Resources/7foundationalprinciples.pdf>; Ira S Rubinstein and Nathaniel Good, ‘Privacy by Design: A Counterfactual Analysis of Google and Facebook Privacy Incidents’ (2013) 28(2) Berkeley Technology Law Journal 1333; Peter Schaar, ‘Privacy by Design’ (2010) 3 Identity in the Information Society 267 <https://doi.org/10.1007/s12394-010-0055-x>; Ira S Rubinstein, ‘Regulating Privacy by Design’ (2011) 26(3) Berkeley Technology Law Journal 1409; George Danezis and others, Privacy and Data Protection by Design – From Policy to Engineering (ENISA 2014); Michael Birnhack, Eran Toch and Irit Hadar, ‘Privacy Mindset, Technological Mindset’ (2014) 55(1) Jurimetrics 55; Demetrius Klitou, ‘A Solution, But Not a Panacea for Defending Privacy: The Challenges, Criticism and Limitations of Privacy by Design’ in Bart Preneel and Demosthenes Ikonomou (eds), Privacy Technologies and Policy: First Annual Privacy Forum, APF 2012 (Springer Verlag 2014) 86-110; Dag Wiese Schartum, ‘Making Privacy By Design Operative’ (2016) 24(2) International Journal of Law and Information Technology 151 <https://doi.org/10.1093/ijlit/eaw002>; Woodrow Hartzog, Privacy’s Blueprint: The Battle to Control the Design of New Technologies (Harvard University Press 2018); Tamò-Larrieux (n 7); Line Jasmontaite, Irene Kamara, Gabriela Zanfir-Fortuna and Stefano Leucci, ‘Data Protection by Design and by Default: Framing Guiding Principles into Legal Obligations in the GDPR’ (2018) 4(2) European Data Protection Law Review 168 <https://doi.org/10.21552/edpl/2018/2/7>; Avner Levin, ‘Privacy by Design by Regulation: The Case Study of Ontario’ (2018) 4(1) Canadian Journal of Comparative and Contemporary Law 115; Ari Ezra Waldman, ‘Privacy’s Law of Design’ (2019) 9(5) UC Irvine Law Review 1239; Jaap-Henk Hoepman, Privacy Is Hard and Seven Other Myths: Achieving Privacy through Careful Design (MIT Press 2021).
  • 146
    See eg Batya Friedman, Peter H Kahn Jr and Alan Borning, ‘Value Sensitive Design and Information Systems’ in Kenneth Einar Himma and Herman T Tavani (eds), The Handbook of Information and Computer Ethics (Wiley 2008) 69-101; Sarah Spiekermann, Ethical IT Innovation: A Value-Based System Design Approach (Taylor & Francis 2016); Richard Owen, René von Schomberg and Phil Macnaghten, ‘An Unfinished Journey? Reflections on a Decade of Responsible Research and Innovation’ (2021) 8(2) Journal of Responsible Innovation 217 <https://doi.org/10.1080/23299460.2021.1948789>.
  • 147
    See eg Mireille Hildebrandt and Bert-Jaap Koops, ‘The Challenges of Ambient Law and Legal Protection in the Profiling Era’ (2010) 73(3) Modern Law Review 428, 429 <https://doi.org/10.1111/j.1468-2230.2010.00806.x>.
  • 148
    See eg Eberhard Becker and others (eds), Digital Rights Management: Technological, Economic, Legal and Political Aspects (Springer 2003) concerning ‘Digital Rights Management Systems’; Niva Elkin-Koren, ‘Fair Use by Design’ (2017) 64 UCLA Law Review 1082 (arguing that ‘fair use’ considerations need to be embedded into algorithms that (co-)determine how copyrighted material is exploited).
  • 149
    See eg Hanne Marie Motzfeldt, ‘The Danish Principle of Administrative Law by Design’ (2017) 23(4) European Public Law 739 (describing the emergence in Denmark of a general ‘principle of administrative law by design’ aimed at ensuring that technology respects core administrative law requirements).
  • 150
    See eg Joshua A Kroll and others, ‘Accountable Algorithms’ (2017) 165 University of Pennsylvania Law Review 633, 640 (arguing that the accountability of automated decisional systems is best ensured by making ‘accountability part of the system’s design from the start’).
  • 151
    See eg European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) and European Parliament resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics (2018/2088(INI)) (emphasising that ‘the trend towards automation requires that those involved in the development and commercialisation of artificial intelligence applications build in security and ethics at the outset’, that robots ‘should be designed using processes that ensure their safety and security’ and that ‘any AI model deployed should have ethics by design’).
  • 152
    See eg Expert Group on Liability and New Technologies – New Technologies Formation, Liability for Artificial Intelligence and other Emerging Digital Technologies (European Union 2019) 47-48 (proposing a qualified duty on ‘producers to equip technology with means of recording information about the operation of the technology (logging by design)’); Paul Nemitz, ‘Constitutional Democracy and Technology in the Age of Artificial Intelligence’ (2018) 376(2133) Philosophical Transactions of the Royal Society A 20180089 <https://doi.org/10.1098/rsta.2018.0089> (proposing a ‘principle of rule of law, democracy and human rights by design in AI’).
  • 153
    See also Lee A Bygrave, ‘Hardwiring Privacy’ in Roger Brownsword, Eloise Scotford and Karen Yeung (eds), The Oxford Handbook of Law, Regulation, and Technology (Oxford University Press 2017) 754, 755.
  • 154
    See eg recital 41 CA and recital 97 ECC, both set out in Section 3.1. See also Commission Recommendation of 9 May 2009 on the implementation of privacy and data protection principles in applications supported by radio-frequency identification [2009] OJ L 122/47 recital 6 (‘privacy and information security features should be built into RFID applications before their widespread use (principle of “security and privacy-by-design”)’).
  • 155
    Cavoukian (n 145). See also Schaar (n 145) 271 (‘Privacy by Design means first of all data security’).
  • 156
    Ann Cavoukian, ‘International Council on Global Privacy and Security, by Design’ (2016) 35(5) IEEE Potentials 43 <https://doi.org/10.1109/MPOT.2016.2569741>.
  • 157
    For instructive discussion of the interrelationship of the various values served by data protection and data security respectively, see Ibo van de Poel, ‘Core Values and Value Conflicts in Cybersecurity: Beyond Privacy versus Security’ in Markus Christen, Bert Gordijn and Michele Loi (eds), The Ethics of Cybersecurity (Springer 2020) 45-71.
  • 158
    See also Bygrave (n 153) 761.
  • 159
    See also EDPB (n 79), which goes out of its way to distinguish DPbDD from PbD.
  • 160
    Arnbak (n 7) 8.
  • 161
    Lucia Zedner, ‘The Concept of Security: An Agenda for Comparative Analysis’ (2003) 23(1) Legal Studies 153, 154 <https://doi.org/10.1111/j.1748-121X.2003.tb00209.x>.
  • 162
    ibid 176.
  • 163
    See eg Dunn Cavelty (n 140) 703ff.
  • 164
    Arnbak (n 7) 50.
  • 165
    ibid 51.
  • 166
    See also Anderson (n 134) 16 (‘in general, robust security design requires that the protection goals are made explicit’).
  • 167
    Tamò-Larrieux (n 7) 105. See also Joseph S Nye, Jr, ‘Power and National Security in Cyberspace’ in Kristin M Lord and Travis Sharp (eds), America’s Cyber Future: Security and Prosperity in the Information Age (Center for a New American Security 2011) vol II 7, 15 (‘Security is the absence or reduction of threat to key values’).
  • 168
    See eg Dominik Herrmann and Henning Pridöhl, ‘Basic Concepts and Models of Cybersecurity’ in Markus Christen, Bert Gordijn and Michele Loi (eds), The Ethics of Cybersecurity (Springer 2020) 11, 13; Charles P Pfleeger, Shari Lawrence Pfleeger and Jonathan Margulies, Security in Computing (5th edn, Pearson 2015) section 1.2; ISO/IEC 27000:2018(E) (Information technology – Security techniques – Information security management systems – Overview and vocabulary) clauses 3.10, 3.36, 3.7.
  • 169
    See eg James PG Sterbenz, David Hutchison, Egemen K Çetinkaya, Abdul Jabbar, Justin P Rohrer, Marcus Schöller and Paul Smith, ‘Resilience and Survivability in Communication Networks: Strategies, Principles, and Survey of Disciplines’ (2010) 54(8) Computer Networks 1245, 1246 <https://doi.org/10.1016/j.comnet.2010.03.005> (defining network resilience as ‘the ability of the network to provide and maintain an acceptable level of service in the face of various faults and challenges to normal operation’); Fredrik Björck, Martin Henkel, Janis Stirna and Jelena Zdravkovic, ‘Cyber Resilience – Fundamentals for a Definition’ in Alvaro Rocha, Ana Maria Correia, Sandra Costanzo and Luis Paulo Reis (eds), New Contributions in Information Systems and Technologies (Springer 2015) 311, 315 <https://doi.org/10.1007/978-3-319-16486-1_31> (defining ‘cyber resilience’ as ‘the ability to continuously deliver the intended outcome despite adverse cyber events’).
  • 170
    ISO/IEC (n 168) clause 3.55.
  • 171
    ibid clause 3.6.
  • 172
    ibid clause 3.48.
  • 173
    Such as the famous ‘orange book’: see US Department of Defense, ‘Trusted Computer System Evaluation Criteria’, DOD5200.28-STD (December 1985) at, inter alia, 27-28, 30-31 and 38-39.
  • 174
    Indeed, some policy entrepreneurs argue that ‘cyber resilience’ ought to replace ‘cybersecurity’ as an overarching goal of information systems development as it allegedly embraces a more realistic approach to threat management. See further Section 6.1.
  • 175
    Algirdas Avizienis, Jean-Claude Laprie, Brian Randell and Carl Landwehr, ‘Basic Concepts and Taxonomy of Dependable and Secure Computing’ (2004) 1(1) IEEE Transactions on Dependable and Secure Computing 11 <https://doi.org/10.1109/TDSC.2004.2>; Sterbenz and others (n 169).
  • 176
    Indeed, Hunsbedt notes that only three language versions of EU product safety legislation do not use the same term for both concepts. In addition to English, these are Bulgarian (‘безопасен’ and ‘сигурност’) and Estonian (‘ohutus’ and ‘turvalisuse’). However, as Hunsbedt aptly remarks, ‘the use of wider-reaching terms in several languages does not preclude a narrower construction (e.g., equivalent to the English “safety” term) in legal texts’. See Christiane Hunsbedt, Security Requirements for Connected Consumer Products, CompLex 1/2021 (University of Oslo 2021) 12.
  • 177
    Herrmann and Pridöhl (n 168) 14.
  • 178
    See eg Ludovic Piètre-Cambacédès and Claude Chaudet, ‘The SEMA Referential Framework: Avoiding Ambiguities in the Terms “Security” and “Safety”’ (2010) 3(2) The International Journal of Critical Infrastructure Protection 55 <https://doi.org/10.1016/j.ijcip.2010.06.003>.
  • 179
    ibid 59.
  • 180
    See eg Avizienis and others (n 175) 13 (defining ‘safety’ as ‘absence of catastrophic consequences on the user(s) and the environment’).
  • 181
    William Young and Nancy G Leveson, ‘An Integrated Approach to Safety and Security Based on Systems Theory’ (2014) 57(2) Communications of the ACM 31 <https://doi.org/10.1145/2556938>.
  • 182
    See generally Anderson (n 134) 970ff.
  • 183
    Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys [2009] OJ L 170/11.
  • 184
    See eg ANEC and BEUC (n 29) 10-11.
  • 185
    For systematic analysis supporting this view, see Hunsbedt (n 176) ch 3. The mismatch between current product safety legislation and cybersecurity concerns is not just a function of the semantic differences between ‘safety’ and ‘security’. For example, doubt arises over whether the concept of ‘product’ extends to mere software, a fortiori when the software is not ‘embedded’ in hardware: see eg EU Expert Group on Liability and New Technologies – New Technologies Formation, Liability for Artificial Intelligence and other Emerging Digital Technologies (EU 2019) 28.
  • 186
    See eg Eduard Fosch-Villaronga and Tobias Mahler, ‘Cybersecurity, Safety and Robots: Strengthening the Link between Cybersecurity and Safety in the Context of Care Robots’ (2021) 41 Computer Law & Security Review 105528 <https://doi.org/10.1016/j.clsr.2021.105528>; Hunsbedt (n 176) ch 5; EU Expert Group on Liability and New Technologies – New Technologies Formation (n 185) 48-49.
  • 187
    Young and Leveson (n 181) 33.
  • 188
    See also eg Nektarios Karanikas, ‘Revisiting the Relationship Between Safety and Security’ (2018) 8(4) International Journal of Safety and Security Engineering 547 <https://doi.org/10.2495/SAFE-V8-N4-547-551>; Alberto Martinetti, Peter K Chemweno, Kostas Nizamis, and Eduard Fosch-Villaronga, ‘Redefining Safety in Light of Human-Robot Interaction: A Critical Review of Current Standards and Regulations’ (2021) 3 Frontiers in Chemical Engineering 666237 <https://doi.org/10.3389/fceng.2021.666237>.
  • 189
    ENISA (n 19) 47 (‘Security must consider the risk posed to human safety’).
  • 190
    See eg Article 4(2) NISD, Article 32(1) GDPR, Article 2(21) ECC and Article 46(2) CA. But note the Medical Devices Regulation (n 59) which imposes SbD as part of the regulation’s general ‘safety’ requirements.
  • 191
    This is the case, for instance, with the GDPR: see especially Article 4(12) (‘“personal data breach” means a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data …’ (emphasis added)); Article 5(1)(f) (personal data shall be ‘processed in a manner that ensures appropriate security …, including protection against … accidental loss, destruction or damage …’ (emphasis added)); recital 83 (‘In assessing data security risk, consideration should be given to the risks that are presented by personal data processing, such as accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data …’ (emphasis added)).
  • 192
    See eg Article 4(2) NISD (defining ‘security of network and information systems’ as ‘the ability … to resist, at a given level of confidence, any action that compromises’ the security of such systems (emphasis added)).
  • 193
    Recall the Council’s General Approach to the NIS2D proposal, set out in Section 3.5.
  • 194
    See amendment of EHSR 1.2.1 on safety and reliability of control systems in the Commission’s proposal for a Regulation on machinery products (n 65).
  • 195
    See also Article 2(21) ECC.
  • 196
    See also Sterbenz and others (n 169).
  • 197
    See Article 2(2) CED proposal (n 119) (defining ‘resilience’ as ‘the ability to prevent, resist, mitigate, absorb, accommodate to [sic] and recover from an incident that disrupts or has the potential to disrupt the operations of a critical entity’).
  • 198
    See eg Article 1(1) NIS2D proposal (n 64) (‘This Directive lays down measures with a view to ensuring a high common level of cybersecurity within the Union’) and recital 59 of the same proposal (‘Maintaining accurate and complete databases of domain names and registration data (so called ‘WHOIS data’) and providing lawful access to such data is essential to ensure the security, stability and resilience of the DNS, which in turn contributes to a high common level of cybersecurity within the Union’).
  • 199
    See eg Legislative Financial Statement accompanying NIS2D proposal (n 64) 2 (‘The revision’s objective is to increase the level of cyber resilience of a comprehensive set of businesses operating in the European Union across all relevant sectors, to reduce inconsistencies in the resilience across the internal market in the sectors already covered by the Directive and to improve the level of joint situational awareness and the collective capability to prepare and respond’).
  • 200
    See eg recital 39 GDPR (‘Personal data should be processed in a manner that ensures appropriate security and confidentiality of the personal data, including for preventing unauthorised access to or use of personal data and the equipment used for the processing’).
  • 201
    For similar criticism, see Lukas Feiler, Nikolaus Forgó and Michaela Weigl, The EU Data Protection Regulation (GDPR): A Commentary (German Law Publishers 2018) 75-76.
  • 202
    See also Schartum (n 84) 262 (noting the lack of a systematic and well-structured conceptual approach to security norms in the GDPR).
  • 203
    Bruno Latour, ‘A Cautious Prometheus? A Few Steps Toward a Philosophy of Design (with Special Attention to Peter Sloterdijk)’ in Fiona Hackney, Jonathan Glynne and Viv Minto (eds), Networks of Design: Proceedings of the 2008 Annual International Conference of the Design History Society (Universal Publishers 2008) 2 (‘From a surface feature in the hands of a not-so-serious-profession that added features in the purview of much-more-serious-professionals (engineers, scientists, accountants), design has been spreading continuously so that it increasingly matters to the very substance of production. What is more, design has been extended from the details of daily objects to cities, landscapes, nations, cultures, bodies, genes, and, as I will argue, to nature itself—which is in great need of being re-designed. It is as though the meaning of the word has grown in what logicians refer to as “comprehension” and “extension”’).
  • 204
    Hartzog (n 145) 11.
  • 205
    ibid 12.
  • 206
    Id.
  • 207
    See also the concurring remarks in Richmond Y Wong and Deirdre K Mulligan, ‘Bringing Design to the Privacy Table: Broadening “Design” in “Privacy by Design” Through the Lens of HCI’ in CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4-9, 2019, Glasgow, Scotland UK (ACM 2019) paper 262, 3 <https://doi.org/10.1145/3290605.3300492>; Waldman (n 145).
  • 208
    Judy McKay, Peter Marshall and Greg Heath, ‘An Exploration of the Concept of Design in Information Systems’ in Shirley D Gregor and Dennis N Hart (eds), Information Systems Foundations: The Role of Design Science (ANU E Press 2010) 91, 92. See also Paul Ralph and Yair Wand, ‘A Proposal for a Formal Definition of the Design Concept’ in Kalle Lyytinen, Pericles Loucopoulos, John Mylopoulos and Bill Robinson (eds), Design Requirements Engineering: A Ten-Year Perspective (Springer 2009) 103, 103 <https://doi.org/10.1007/978-3-540-92966-6_6>.
  • 209
    McKay and others (n 208) 92.
  • 210
    Bob Jessop, ‘Governance, Governance Failure, and Meta-Governance’, monograph presented at International Seminar on ‘Policies, Governance and Innovation for Rural Areas’, 21-23 November 2003, Università della Calabria, 4 <https://ceses.cuni.cz/CESES-136-version1-3B_Governance_requisite_variety_Jessop_2002.pdf>.
  • 211
    The reference to ‘object’ denotes artifacts, systems or processes, and is therefore broader than simply physical objects: Ralph and Wand (n 208) 105, 108.
  • 212
    The reference to ‘primitive’ denotes ‘the lowest level’ of the components (physical or conceptual) used to create the design object: ibid 106.
  • 213
    ibid 108.
  • 214
    Id.
  • 215
    ibid 110.
  • 216
    Id.
  • 217
    Winston W Royce, ‘Managing the Development of Large Software Systems’ in Technical Papers of the IEEE Western Electronic Show and Convention (IEEE 1970) 1-9.
  • 218
    Peter Freeman and David Hart, ‘A Science of Design for Software-Intensive Systems’ (2004) 47(8) Communications of the ACM 19, 20 <https://doi.org/10.1145/1012037.1012054>.
  • 219
    See also, inter alia, Alan R Hevner, Salvatore T March, Jinsoo Park and Sudha Ram, ‘Design Science in Information Systems Research’ (2004) 28(1) MIS Quarterly 75, 85 (‘design is inherently an iterative and incremental activity’). Viewing design as such a process is far from recent; already in the 1980s, some of the design-specific scholarship advocated this view: see eg John Chris Jones, ‘Softecnica’ in John Thackara (ed), Design after Modernism: Beyond the Object (Thames & Hudson 1988) 216-26.
  • 220
    Waldman (n 145) 1250 and references cited therein.
  • 221
    ibid 1252.
  • 222
    Hartzog (n 145) 11.
  • 223
    See eg Douglas Schuler and Aki Namioka (eds), Participatory Design: Principles and Practices (Lawrence Erlbaum Associates 1993).
  • 224
    Irina Brass and Jesse H Sowell, ‘Adaptive governance for the Internet of Things: Coping with emerging security risks’ (2021) 15(4) Regulation & Governance 1092 <https://doi.org/10.1111/rego.12343>.
  • 225
    ibid 1107.
  • 226
    Ware (n 12) iv.
  • 227
    ISO/IEC (n 168) 14. Further on the ISMS concept, see espec ISO/IEC (n 82) and TeleTrusT and ENISA (n 80) 67ff. Further on the necessity (and challenges) of building sensible organisational frameworks for security risk management in the computing context, see Anderson (n 134) 1000ff.
  • 228
    See Case C-340/21, Request for a preliminary ruling from the Varhoven administrativen sad (Bulgaria) lodged on 2 June 2021—VB v Natsionalna agentsia za prihodite [2021] OJ C 329/12. The case concerns the implications of Articles 24 and 32 GDPR for a controller that has been the victim of an extrinsic hacking attack involving unauthorised access to and disclosure of personal data. More particularly, the CJEU has been requested to address, inter alia: (i) whether such disclosure creates a presumption that the controller has failed to implement the requisite ‘technical and organisational measures’ under these provisions; (ii) who bears the burden of proving that the implemented measures are appropriate pursuant to Article 32; and (iii) whether damage that is to be compensated for under Article 82 GDPR may extend to the psychological distress suffered by the data subjects because of their fears over potential misuse of the hacked data but where no evidence has been adduced that the data have been misused or that the data subjects have otherwise been harmed.
  • 229
    Bruce Schneier, ‘Risks of Relying on Cryptography’ (1999) 42(1) Communications of the ACM 144 <https://doi.org/10.1145/317665.317684>.
  • 230
    In this regard, the ‘state of the art’ criterion also undermines the traditional (yet contested) distinction between lex lata and lex ferenda.
  • 231
    See eg Articles 14(1) and 16(1) NISD, Article 32(1) GDPR, Article 4(1) EPD, Article 40(1) ECC and Article 19(1) eIDAS Regulation. Note also recital 8 CA (‘Cybersecurity is not only an issue related to technology, but one where human behaviour is equally important’).
  • 232
    OECD (n 35).
  • 233
    See also recital 75 CA (‘It is not possible to set out in detail the cybersecurity requirements relating to all ICT products, ICT services and ICT processes in this Regulation. ICT products, ICT services and ICT processes and the cybersecurity needs related to those products, services and processes are so diverse that it is very difficult to develop general cybersecurity requirements that are valid in all circumstances’). In a case study on the attempted implementation of PbD in a Canadian context, Levin makes a similar point, urging a mix of rigidity (with regard to insistence on the principle’s normative importance) and flexibility (with regard to measures for realising the principle): Levin (n 145) 155-56.
  • 234
    See also Schartum (n 84) 264.
  • 235
    As the case law cited in Section 3.4 suggests, along with the accountability principle laid out in Article 5(2) GDPR and operationalised in Articles 24(1) and 25(1) GDPR. With regard to the GDPR’s use of the phrase, see also Dag Wiese Schartum, ‘“Technical and Organisational Measures”—A Systematic Analysis of Required Data Protection Measures in the GDPR’ in Jean Herveg (ed), Deep Diving into Data Protection: 1979-2019 : Celebrating 40 Years of Research on Privacy and Data Protection at the CRIDS (Larcier 2021) 289, 295 (‘Both concepts [‘technical’ and ‘organisational’] represent rich possibilities of dynamic development of interpretation, in line with technical and societal developments. Given the extremely wide scope of the GDPR and the complexity and discretionary nature of the Regulation with the myriad of deriving [sic] legal questions which could become topical, wide and flexible concepts are desirable, if not necessary’).
  • 236
    See also Schartum (n 235) 295.
  • 237
    See eg Donald A Schön, Technology and Change: The New Heraclitus (Pergamon Press 1967) 1 (defining technology as ‘any tool or technique, any product or process, any physical equipment or method of doing or making, by which human capability is extended’).
  • 238
    As one of multiple steps to ensure ‘security of processing’ personal data, Regulation (EC) 45/2001 (repealed and replaced by Regulation (EU) 2018/1725) required ‘designing the organisational structure within an institution or body in such a way that it will meet the special requirements of data protection’ (Article 22(2)(j)). The reference to design of organisational structure has disappeared from the face of current EU data protection laws but can be easily read into their more generic references to ‘technical and organisational measures’.
  • 239
    Schartum seems to take the same line, though somewhat more hesitantly: Schartum (n 235) 296ff.
  • 240
    See also EDPB (n 79) para 8 (‘Technical and organisational measures and necessary safeguards can be understood in a broad sense as any method or means that a controller may employ in the processing’ to protect personal data). Cf Schartum (n 84) 253 (interpreting the reference to ‘technical and organisational measures’ in Article 25(1) GDPR so that they concern only IS architecture in the form of IS solutions (‘systemløsninger’) and software (‘programvare’)).
  • 241
    As the presentation in Section 3.3 suggests. This is also exemplified in the Penalty Notice issued in October 2020 by the UK Information Commissioner’s Office (ICO) fining Marriott International Inc for breach of Articles 5(1)(f) and 32 GDPR. The ICO used guidance documents from the UK National Cyber Security Centre and the US National Institute of Standards and Technology as benchmarks for determining the kinds of security measures required by the regulation. See ICO, Penalty Notice of 30 October 2020 (Case Reference COM0804337) paras 6.15-6.17, 6.20, 6.33, 6.35 and 6.41.
  • 242
    See eg recital 50 Digital Content Directive (n 60) (‘When applying the rules of this Directive, traders should make use of standards, open technical specifications, good practices and codes of conduct, including … on the security of information systems and digital environments, whether established at international level, Union level or at the level of a specific industry sector’).
  • 243
    See generally TeleTrusT and ENISA (n 80); ISO/IEC (n 82); ISACA (n 82). See also EDPB (n 79) para 85 (listing a large range of ‘key design and default’ security measures for personal data).
  • 244
    See eg Article 32(1)(c) GDPR, Articles 14(2) and 16(2) NISD and Article 19(1) eIDAS Regulation.
  • 245
    See further Tosoni (n 105) ch 8.
  • 246
    See eg Saltzer and Schroeder (n 14) 1282; Herrmann and Pridöhl (n 168) 17.
  • 247
    Brendan van Alsenoy, ‘Liability under EU Data Protection Law: From Directive 95/46 to the General Data Protection Regulation’ (2016) 7(3) Journal of Intellectual Property, Information Technology and Electronic Commerce 271, 284; Wolters (n 123) 172; Cédric Burton, ‘Article 32. Security of Processing’ in Kuner and others (n 46) 630, 637; ICO (n 241) para 6.5.
  • 248
    See also Michels and Walden (n 139) 39 (stating that the NISD requirements ‘are more about process than exact outcomes’ and that ‘not every security breach—no matter how disruptive—should constitute a breach’ of the Directive).
  • 249
    EDPB (n 79) para 13.
  • 250
    ibid para 96.
  • 251
    See also eg the incipit of Article 25(1) GDPR (using the same qualifications as Article 32(1) GDPR) and recital 53 NISD (‘To avoid imposing a disproportionate financial and administrative burden on operators of essential services and digital service providers, the requirements should be proportionate to the risk presented by the network and information system concerned, taking into account the state of the art of such measures. …’).
  • 252
    Burton (n 247) 635-36.
  • 253
    I v Finland (n 90) para 47 (emphasis added).
  • 254
    ibid para 19.
  • 255
    ibid para 44.
  • 256
    ibid para 44.
  • 257
    The measure being a comprehensive logging of the persons who access(ed) patient files: ibid para 41.
  • 258
    Michels and Walden (n 139) 45.
  • 259
    NIS2D proposal (n 64) 5.
  • 260
    See eg Burton (n 247) 630, 637; Waltraut Kotschy, ‘Article 83. General Conditions for Imposing Administrative Fines’ in Kuner and others (n 46) 1180, 1190; Paul Voigt and Axel von dem Bussche, The EU General Data Protection Regulation: A Practical Guide (Springer 2017) 38.
  • 261
    An argument advanced by Marriott International Inc in response to the penalty notice issued by the ICO in October 2020: ICO (n 241) para 7.80. British Airways plc mounted a similar argument, also in response to a penalty notice by the ICO issued in October 2020: see ICO, Penalty Notice of 16 October 2020 (Case Reference COM0783542) para 7.77ff.
  • 262
    And was duly rejected by the ICO for broadly similar reasons to those I present: see ICO (n 241) para 7.80 and ICO (n 261) para 7.77ff.
  • 263
  • 264
    See eg Norway’s DPA (Datatilsynet), Decision of 18 October 2021 concerning Østre Toten municipality (Case Reference 21/00480-10); Sweden’s DPA (Integritetsskyddsmyndigheten), Decision of 26 January 2022 concerning the Uppsala Hospital Board (Case Reference DI-2021-5595); Ireland’s Commissioner for Data Protection, Decision of 2 December 2021 in the matter of The Teaching Council of Ireland (Case Reference IN-20-4-1); ICO, Penalty Notice of 15 November 2021 concerning the UK Cabinet Office (case reference not specified); ICO (n 261); Sweden’s DPA, Decision of 7 June 2021 concerning MedHelp AB (Case Reference DI-2019-3375).
  • 265
    Ireland’s Commissioner for Data Protection (n 264) para 8.56.
  • 266
    See eg The Netherlands’ DPA (Autoriteit Persoonsgegevens), Decision of 26 November 2020 concerning Stichting OLVG (case reference not specified); Sweden’s DPA, Decision of 7 June 2021 concerning Voice Integrate Nordic AB (Case Reference DI-2019-2488).
  • 267
    See eg Spain’s DPA (Agencia Española de Protección de Datos), Decision of 12 April 2022 concerning La Comunidad de Propietarios (Case Reference PS/00043/2021).
  • 268
    For incisive critique, see Mona Naomi Lintvedt, ‘Putting a Price on Data Protection Infringement’ (2022) 12(1) International Data Privacy Law 1 <https://doi.org/10.1093/idpl/ipab024>.
  • 269
  • 270
    See Article 29 Data Protection Working Party, ‘Guidelines on the Application and Setting of Administrative Fines for the Purposes of the Regulation 2016/679’ (WP253, 3 October 2017). The Working Party was the EDPB’s predecessor.
  • 271
    Cf Dunn Cavelty (n 140) 706 (‘In cyber-security as currently understood and practised, human beings are seen as victims, as weakest link in the system, as direct threat—but not (or only very indirectly) as beneficiaries of the type of security that states (and companies) want’).
  • 272
    Björck and others (n 169) 315. See also eg Ray A Rothrock, ‘Digital Network Resilience: Surprising Lessons from the Maginot Line’ (2017) 2(3) The Cyber Defense Review 33, 36–37.
  • 273
    See Lee A Bygrave, ‘Cyber Resilience versus Cybersecurity as Legal Aspiration’ in Tat’ána Jančárková, Gabor Visky and Ingrid Winther (eds), 14th International Conference on Cyber Conflict: Keep Moving (IEEE 2022) 27-44.
  • 274
    Björck and others (n 169) 312 (emphasis added).
  • 275
    ibid 315.
  • 276
    See eg Anderson (n 134); Alan R Hevner, Salvatore T March, Jinsoo Park and Sudha Ram, ‘Design Science in Information Systems Research’ (2004) 28(1) MIS Quarterly 75, 88-89; Mareile Kaufmann, ‘Resilience governance and ecosystemic space: a critical perspective on the EU approach to Internet security’ (2015) 33 Environment and Planning D: Society and Space 512, 524 <https://doi.org/10.1177/0263775815594309>.
  • 277
    See Dirk Hovorka and Matt Germonprez, ‘Identification-Interaction-Innovation: A Phenomenological Basis for an Information Services View’ in Shirley D Gregor and Dennis N Hart (eds), Information Systems Foundations: The Role of Design Science (ANU E Press 2010) 3, 5; Alan R Hevner, Salvatore T March, Jinsoo Park and Sudha Ram, ‘Design Science in Information Systems Research’ (2004) 28(1) MIS Quarterly 75.
  • 278
    Terry Winograd and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design (Ablex Publishing 1986) 53; Hovorka and Germonprez (n 277) 9.
  • 279
    See entry ‘Roy Amara 1925-2007, American Futurologist’ in Susan Ratcliffe (ed), Oxford Essential Quotations (4th edn, Oxford University Press 2016) 1 <https://doi.org/10.1093/acref/9780191826719.001.0001>.
  • 280
    See generally Laura DeNardis, The Global War for Internet Governance (Yale University Press 2014); Milton L Mueller, Networks and States: The Global Politics of Internet Governance (MIT Press 2010).
  • 281
    Paul Dourish, Where The Action Is: The Foundations of Embodied Interaction (MIT Press 2001) 131.
  • 282
    Seda Gürses and Joris van Hoboken, ‘Privacy after the Agile Turn’ in Evan Selinger, Jules Polonetsky and Omer Tene (eds), The Cambridge Handbook of Consumer Privacy (Cambridge University Press 2018) 579, 583. See also Blagovesta Kostova, Seda Gürses and Carmela Troncoso, ‘Privacy Engineering Meets Software Engineering. On the Challenges of Engineering Privacy By Design’ (2020) arXiv <http://arxiv.org/abs/2007.08613>.
  • 283
    Carliss Y Baldwin and Kim B Clark, ‘Managing in an Age of Modularity’ (1997) 75(5) Harvard Business Review 84.
  • 284
    Anderson (n 134) 982.
  • 285
    ibid 987.
  • 286
    See eg Birnhack and others (n 145); Levin (n 145); Bygrave (n 153) espec 764-65.
  • 287
    Spiekermann and others (n 143) 611.
  • 288
    ibid 612 (‘for a substantial amount of engineers it seems it is a lack of perceived responsibility and “lack of zest” that leads them to neglect the two values’).
  • 289
    id.
  • 290
    At the risk of spelling out the obvious, securitisation denotes a discursive modality in which the need to achieve security is given progressively greater priority at the expense of other interests and goals, and which involves the galvanisation of state actors into employing increasingly stringent measures in reaction to a threat situation presented in increasingly alarmist tones. This process is marked by urgency, with political debate stifled under an accelerating sense of threat. See generally Barry Buzan, Ole Wæver and Jaap de Wilde, Security: A New Framework for Analysis (Lynne Rienner Publishers 1998) 23ff.
  • 291
    See Lene Hansen and Helen Nissenbaum, ‘Digital Disaster, Cyber Security, and The Copenhagen School’ (2009) 53(4) International Studies Quarterly 1155 <https://doi.org/10.1111/j.1468-2478.2009.00572.x>.
  • 292
    See eg NIS Cooperation Group, ‘EU coordinated risk assessment of the cybersecurity of 5G networks’ (9 October 2019).
  • 293
    For background, see David Kravets, ‘Sony agrees to pay millions to gamers to settle PS3 Linux debacle’, Ars Technica 21 June 2016 <https://arstechnica.com/tech-policy/2016/06/if-you-used-to-run-linux-on-your-ps3-you-could-get-55-from-sony/>. Note that the settlement agreement referred to by Kravets was subsequently replaced by another.
  • 294
    See eg Paul Kunert, ‘HP to hike upfront price of printer hardware as ink biz growth runs dry’, The Register 9 October 2019 <https://www.theregister.com/2019/10/09/hp_supplies/>.
  • 295
    See eg Hovorka and Germonprez (n 277) 5.
  • 296
    Brass and Sowell (n 224).
  • 297
    OECD, ‘Recommendation of the Council on Digital Security Risk Management for Economic and Social Prosperity’ (n 37) 47.