Cybersecurity Law and the Internet of Things

June 5, 2016

According to Gartner, Inc., worldwide spending on IoT security will reach $348 million in 2016, a 23.7% increase from 2015 spending of $281.5 million.[1] Gartner also predicts that by 2020, more than 25% of identified attacks in enterprises will involve IoT, although IoT will account for less than 10% of IT security budgets. What is IoT? And why might it be insecure? Does it matter to organisations (and lawyers) considering cybersecurity risks and law?

IoT refers to the idea that most things could benefit from being attached to the internet. IoT devices are typically low-powered computers, connected to the internet and designed to perform a single purpose. This purpose often entails the use of sensors – location, humidity, temperature, light, etc – but IoT is as much about individual devices becoming ‘smart’ as it is about tracking and logistics:

·       in the consumer space, smart TVs, the smart home, and many wearables (especially smart watches) are early examples, typically connected to consumer Wifi networks;[2] and

·       in enterprise and commerce, IoT devices might power smart logistics, supply chain tracking, smart-healthcare / telemedicine, digital wallet and payment devices, etc, typically utilising a broad class of communications technologies that have become known as machine-to-machine (M2M).

Cybersecurity law is a broad church, and as likely to be implemented in domain-specific legislation as it is to be of general application.[3] For now, many lawyers charged with advising upon cybersecurity will tend to have a background in data privacy. This article approaches the subject from that perspective, and restricts itself to the more typical IoT examples given above.

It will not have escaped the notice of a reader of this publication that the forthcoming GDPR will impose a breach notification regime and increase the severity of potential sanctions for organisations that suffer a loss of personal data.

At the European Parliament, the first reading of the GDPR included a need to be able to ensure ‘situational awareness’ of threats to sensitive personal data, and to be able to counter such threats ‘in near real time’.[4] That particular provision did not survive the legislative process. The requirement to maintain an appropriate level of security nevertheless remained, taking into account the state of the art, the cost of implementation ‘and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons’.[5]

Of course, the obligation to implement appropriate technical and organisational measures is an existing requirement of the Data Protection Directive,[6] but as the state of the art moves on, so does the complexity of ensuring compliance. Information security is of much higher profile, not to mention sophistication, than it was in 1995. In that sense, ‘situational awareness’ remains a requisite.

Accordingly, the GDPR introduces specific obligations to:

·       regularly test, assess and evaluate the effectiveness of such technical and organisational measures;[7]

·       where possible, be able to produce a record describing those technical and organisational measures;[8] and

·       report certain breaches to the national regulator without delay, and where feasible within 72 hours.[9]

Given these requirements, and the need to understand the threat landscape to help implement ‘privacy by design’,[10] it will be important for privacy professionals to remain abreast of cybersecurity threats and how they are countered.

I consider that there are three key concepts crucial to a high level understanding of the increased cybersecurity risk applicable to IoT in particular:

1.     IoT devices are typically computers that are as prone to viruses and hacking as your desktop PC;

2.     IoT devices are internet based, and the internet is not to be trusted; and

3.     encryption is not a panacea.

I consider each in turn.

1. IoT devices are computers

If most IoT devices to date have been implemented using general purpose computers, then securing such general purpose computers should not be a novel requirement. Steps to keep IoT devices secure should differ little from those required to secure any other computer.[11] Any information security management system (ISMS) designed to meet the ISO27001 series of standards, or similar, should encompass all computing devices used by the organisation, including IoT devices.

The GDPR requires that data protection impact assessments are carried out ‘where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons.’[12] In conducting such impact assessment, particular care must be taken to ensure that the security limitations of the relevant IoT technology are well understood, and that organisational ISMSs and data protection policies have been applied accordingly.  

In particular, although IoT devices are often general purpose computers in principle, they are unlikely to be powerful enough to run a full enterprise operating system, such as Microsoft Windows.[13] Existing enterprise security measures, such as typical anti-virus software, are therefore less likely to be compatible with IoT devices.[14]

Instead, organisations implementing IoT for the first time will often need to develop a whole separate information security infrastructure. In doing so, as identified by the Article 29 Working Party,[15] there are several characteristic risks of IoT devices that may deserve scrutiny and mitigation:

·       IoT devices are typically manufactured and operated by subcontractors, and/or have a complex supply chain ‘with multiple stakeholders assuming different degrees of responsibility’;[16]

·       IoT devices are often created to be low-powered, both in terms of computing power and energy usage; implementing advanced encryption and security protection is not always possible within such constraints;[17]

·       IoT devices have often been created with a focus on cost control, which may have been at the expense of security;

·       IoT devices may be configured remotely, and subject to the control of the vendor to issue updates and security patches which may or may not have been forthcoming;[18] and

·       IoT devices to date have often shipped with observed security flaws.[19]

Any organisation considering using IoT should satisfy itself that its ISMS can be adequately applied to IoT before deploying it.

2. The Internet: trusting by default

The internet is, of course, fundamental to IoT, but was not originally developed with information security in mind.

Core internet protocols such as TCP/IP, as well as many of the higher level protocols that make use of them, including email and the web, are often blindly trusting by design. It is surprisingly easy to intercept and even modify communications on the fly (so called ‘man-in-the-middle’ or ‘proxy’ attacks) without arousing suspicion.[20] There is typically little authentication or protection from manipulation built into the relevant protocols. Whilst the internet protocols may trust anyone, the effect is that end-users should trust no-one.[21]
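By way of illustration, the following Python sketch (illustrative only; the shared key and messages are hypothetical) shows the kind of integrity check that many core internet protocols omit. Without an authentication tag, a message altered in transit is indistinguishable from the original; with a keyed HMAC, the alteration is detectable:

```python
import hashlib
import hmac

# Hypothetical pre-shared key known to sender and recipient, but not
# to the man-in-the-middle.
SHARED_KEY = b"example-shared-secret"

def tag(message: bytes) -> bytes:
    """Compute an authentication tag over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Constant-time check that the message matches its tag."""
    return hmac.compare_digest(tag(message), received_tag)

# The sender transmits the message together with its tag.
original = b"PAY 10.00 TO ALICE"
original_tag = tag(original)

# An attacker on the path can alter the payload, but cannot forge a
# matching tag without knowing the shared key.
tampered = b"PAY 10.00 TO MALLORY"

assert verify(original, original_tag)
assert not verify(tampered, original_tag)
```

Many early protocols (plain HTTP, SMTP, DNS) transmit nothing equivalent to the tag above, which is precisely what makes silent interception and modification feasible.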

Evidence suggests that man-in-the-middle attacks are already being launched on a dramatic scale, including against networks of financial institutions, voice-over-IP providers and governments.[22] There is every reason to suspect that IoT platforms may be targeted too; the motivation for doing so may be to eavesdrop and extract data, to disrupt the operation of IoT or as a first step in pursuing a wider attack.

One security researcher has been able to develop a proof-of-concept IoT man-in-the-middle attack, by modifying an app as it was transmitted to a smart TV. The researcher was able to deliver malware of his own choosing instead of the requested app.[23] There have also been reports of compromised IoT devices participating in ‘bot-nets’, contributing to distributed denial of service (‘DDoS’) attacks.[24]

Any organisation deploying IoT would be wise to consider whether it has the infrastructure to detect, interrupt and prevent attacks against it, its IoT devices and its customers, whether within or outside its firewall, including man-in-the-middle attacks that may not be easily detected or immediately obvious. Pro-active threat management is part of the state of the art.

3. Encryption is not a panacea

Most people agree that encryption is ‘a good thing’.[25] The GDPR can be said to encourage it.[26] The typical advice is to ensure that all valuable information is encrypted ‘in flight’ and ‘at rest’:

·       information encrypted ‘in flight’ – as it is transmitted between Alice and Bob[27] – should be safe from interception or eavesdropping; and

·       information encrypted ‘at rest’ should be safe from decryption by any third party who may have gained unauthorised access to that information.

Repeatedly, though, practical flaws in the implementation of encryption prove that it is not a panacea.

Weaker encryption algorithms can be defeated by ‘brute force’,[28] particularly when combined with known flaws in the algorithm, or pre-computed helper information known as ‘hash tables’ or ‘rainbow tables’.[29] For example, it is possible to hack many older Wifi networks in minutes; newer more secure Wifi protocols are also vulnerable to brute force attacks when the computing power provided by cloud computing is put to the task.[30]  
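The lookup-table idea can be sketched in a few lines of Python. This is a deliberately simplified illustration (the word list is hypothetical, and real rainbow tables use hash chains to trade storage for computation), but it captures why unsalted hashes of weak secrets are so readily reversed:

```python
import hashlib

def sha256_hex(password: str) -> str:
    """Unsalted hash, as too often used for stored credentials."""
    return hashlib.sha256(password.encode()).hexdigest()

# The attacker hashes a candidate word list once, in advance.
dictionary = ["password", "letmein", "admin123", "sunshine"]
precomputed = {sha256_hex(word): word for word in dictionary}

# A leaked, unsalted hash is then reversed by simple lookup rather
# than by hashing every guess afresh.
leaked_hash = sha256_hex("letmein")
recovered = precomputed.get(leaked_hash)
assert recovered == "letmein"

# A per-user salt defeats the precomputed table: the same password
# now hashes to a value the table has never seen.
salted_hash = hashlib.sha256(b"per-user-salt" + b"letmein").hexdigest()
assert precomputed.get(salted_hash) is None
```

The defence, accordingly, is not merely a stronger algorithm but a salted, deliberately slow hashing scheme that makes such precomputation uneconomic.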

Even encryption strong enough to withstand these methods is still at risk of compromise. It is not just the strength of the encryption that matters, but the security of its implementation too.

Most encryption on the internet is done using private encryption keys saved on the relevant server. This is by design: the server needs to be able to decrypt and read the data transmitted to it.

Consider the famous Heartbleed bug.[31] There, a simple programming bug in OpenSSL, a popular implementation of the encrypted communications protocol known as SSL, allowed unauthorised users to gain access to the private encryption keys saved on the relevant server.

This bug enabled the potential for man-in-the-middle attacks. Anyone with the stolen private key could pretend to be the relevant server without suspicion, and therefore intercept even encrypted communications.  

Given that the SSL protocol provides the encryption that protects ‘https:’ – as used by almost every online bank and e-commerce site on the internet – the implications were significant.  
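As a brief sketch of the countermeasure, Python's standard ssl module shows the client-side checks that make such impersonation detectable in a properly configured TLS (the modern successor to SSL) deployment; a server presenting a certificate that fails these checks is rejected before any data is exchanged:

```python
import ssl

# The default client context verifies the server's certificate chain
# against trusted certificate authorities and checks that the
# certificate matches the hostname being contacted.
ctx = ssl.create_default_context()

assert ctx.verify_mode == ssl.CERT_REQUIRED  # reject unverifiable certificates
assert ctx.check_hostname                    # reject mismatched hostnames
```

Note that this is exactly why a stolen private key is so serious: an attacker holding the genuine key and certificate passes both checks, and the interception remains invisible to the client.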

Under the GDPR a data controller who suffers a breach which is a high risk to the rights and freedoms of natural persons must notify the breach to the data subject without delay.[32] Encryption of the relevant data may negate the need to notify,[33] but care must be taken: is there evidence that the encryption key was stolen too? How can you be sure? The Heartbleed bug was present in a huge number of online systems from 2011, but was not publicly disclosed until 2014.


This article has sought to introduce the reader to certain risks inherent to IoT, both technical and legal. This is an interesting and evolving territory. Malware is already targeting devices such as ‘routers, smart thermostats and smart dryers’.[34] Attacks are often combined to maximum effect, and improper implementation of IoT may create a disproportionate risk. I hope never to have to read about a major data loss that was perpetrated via the smart-fridge in the canteen.

Chris James is a senior associate in the Corporate practice of Paul Hastings and is based in the firm’s London office. Chris has advised a number of regulated enterprises on data breach response, including in respect of the loss of millions of records of customer data, ‘hacking’ of insecure APIs and loss of backup media. Prior to law, Chris worked as a systems and web developer.

[1] Gartner Says Worldwide IoT Security Spending to Reach $348 Million in 2016, 25 April 2016,

[2] The Article 29 Working Party’s Opinion 8/2014 on the Recent Developments on the Internet of Things (‘Article 29 WP Opinion on IoT’) sets out three similar examples of early IoT use-cases: wearables, quantified self and home automation.

[3] There is a panoply of relevant laws. For example, where IoT is used for financial services – not inconceivable, especially in the emerging sphere of blockchain – it will be important to protect against cyber-risks as part of implementing adequate risk management systems under relevant financial services law. Telecommunications providers have existing requirements, and both of those sectors – as well as certain additional enterprises – will be impacted by the requirements of the forthcoming Network and Information Security Directive (‘NIS’). For example, certain entities regulated under NIS are expected to be under a positive obligation to prevent and minimise the impact of security incidents, and comply with ambitious security notification obligations. In the future, this author expects to see particular cybersecurity requirements in laws introduced to address many different emerging disciplines, for example ‘driverless cars’.

[4] See the EDPS’s comparison of various versions of Article 30 of the (then) proposed GDPR at

[5] Art 31(1) GDPR

[6] 95/46/EC

[7] Art. 32(1)(d) GDPR

[8] Whether acting as controller or processor, Art. 30 GDPR.

[9] Art. 33(1) GDPR

[10] Art. 25 GDPR

[11] Firewalling, secure configuration, user access control, malware protection, patch management and security auditing, for example, are all as important for IoT devices as any others. Anecdotally, and according to a number of sources such as the IoT search engine, many IoT devices to date have been found to be running poorly secured firmware and operating systems, to have been left running out of date and insecure software, to be incorrectly running without a firewall, and/or to be running with factory-default usernames and passwords or even with no passwords at all.

[12] Art. 35 GDPR.

[13] This is a simplified analysis, as in fact Microsoft Windows does run on IoT devices, as do other common operating systems such as Linux, but in much modified forms such that they appear very different from a typical enterprise desktop installation. See:

[14] IoT security software is an emerging sector in itself, and it will be interesting to see how technologists develop ‘the state of the art’.

[15] My summary of paragraph 4.6 of the Article 29 WP Opinion on IoT.

[16] It is therefore important that legal departments work closely with both the information security and procurement departments in purchasing IoT devices or services, from the earliest possible stage.

[17] For example, IoT devices may not be powerful enough to quickly run complex (e.g. floating point) mathematics, or may have other hardware limitations, making implementing strong encryption challenging. Other techniques, such as pseudonymisation and data minimisation, may be more appropriate.

[18] Even if updates are provided, enterprises should remain vigilant in ensuring such updates are properly deployed.

[19] An enterprise IoT customer may consider conducting its own security audit and/or penetration testing of prospective IoT devices before committing to purchase and deploy them.

[20] Attempts to retrofit security are often hampered by lack of mass-adoption.

[21] See the following blog post, of respected security professional Bruce Schneier, for a more nuanced discussion of this point:

[22] Dyn Research’s note is particularly eye-opening:

[23] The researcher delivered ransomware. Consider by analogy the risk of ransomware to an enterprise’s use of IoT in its supply chain, or to telemedicine IoT used by vulnerable patients.

[24] Some of these reports have been disputed, but reporters still typically recognise the risk:

[25] Even, contrary to some reports, the UK government:

[26] Art. 32(1)(a) GDPR.

[27] Well known placeholder names commonly used in discussions of cryptography.

[28] The process of working out the encryption key by trying different keys – words from a dictionary for example – sequentially.

[29] See:

[30] Thomas Roth, Breaking Encryption in the cloud: GPU accelerated supercomputing for everyone, 2011

[31] See:

[32] Art. 34 GDPR

[33] Art. 34(3)(a) GDPR

[34] See: