Trust me, I’m a computer

January 23, 2017

Our computing platforms are expected to cater for a myriad of hardware, firmware and cloud-dependent software applications to meet users’ needs. The Internet of Things (IoT) is increasingly in demand as we look to leverage computing power through smart devices that report back useful monitoring information, allowing analytics to be relayed or action to be performed upon a user’s request. And, as we look to the not-too-distant future, we can see that AI may result in the introduction of applications such as autonomous vehicles on our roads. With the increasing expectations we impose on computing platforms, can we really ‘trust’ our computers?

We take for granted that our computers are doing exactly what we tell them to do or that they represent what they appear to represent. We necessarily assume that current protections on our platforms and networks (including virtualized cloud platforms) are sufficient to guard against malware and so defend us against cybersecurity breaches. This is difficult in practice to achieve even with cybersecurity expert intervention and plentiful resources. Even with a computer platform that possesses a brand-new operating system, many software modules will need to be loaded even before the OS springs to life. If a rootkit has been surreptitiously installed on a computer platform, this may well be superficially undetectable by the user or, worse, an anti-virus application. This ‘layer-below’ attack is difficult to guard against and such malware may simply remain resident on the platform enabling an attacker to carry out unfettered cybersecurity breaches by gaining access to confidential information and exfiltrating it. However, malware need not invade the layer below if its intention is to invoke its payload immediately and thus cause immediate harm, for example, via ransomware downloaded and activated on a computer platform. There are architectural vulnerabilities that exist on computer platforms too, such as the privileged OS-level access given to device drivers, ie code written by third parties to allow a manufacturer’s hardware to work on an operating system. As a result, there may be justification for being uncertain about the state of our computer platforms or those other computers that we seek to engage with. How does the computer platform know something is not right if malware exists in whatever form? This is where ‘trusted’ computing comes in.

A key to implementing trusted computing capabilities is appreciation of the idea of a trusted computing base (TCB). This terminology was coined by the US Dept. of Defense in the so-called ‘Orange Book‘ (1985) as the ‘totality of protection mechanisms within [the computing system], including hardware, firmware, and software, the combination of which is responsible for enforcing a computer security policy‘. In designing a trusted computing platform, all elements constituting the TCB must be identified. The TCB would incorporate platform components that can be controlled, such as the network card, CPU and chipset, boot loader and sound card. However, the aim is to keep the TCB as small as possible to aid manageability. Therefore, one would look to exclude from the TCB less intrinsic components such as a printer, keyboard, mouse, word processing application and the display.

So what do we mean by ‘trusted’?

Graeme Proudler, Trusted Computing Group chair, defines an entity as trusted if: (1) it can be unambiguously identified; (2) it operates unhindered; and (3) the user has first-hand experience of consistent, good behaviour, or the user trusts someone who vouches for consistent, good behaviour. Based on evidence reported from trusted computing platforms, platform owners and third parties can have increased confidence in their ability to make informed decisions.

What capabilities must we have to build a trusted platform?

While a trusted platform can have varying capabilities, generally, one would anticipate the following as being crucial to a good implementation: identity, protected storage, protected boot and trusted execution.

Identity has two main aspects. First, the platform should unambiguously identify itself via public key cryptography utilising a private (secret) key bound to the platform called the Endorsement Key (EK). Secondly, the identity and configuration of running software can be determined using cryptographic hashing of executing software, ie object code.

Most of the functionalities of a trusted platform are made possible by embedded hardware known as the Trusted Platform Module (TPM). The TPM stores the EK in a shielded, secure environment and acts as the Root of Trust for Reporting. Through a process called remote attestation, the platform can report its identity to a third party. However, were identity reported directly to those needing to make trust-based decisions in this way, this would inevitably lead to a breach of data privacy (ie a platform’s identity, and most likely the identity of the platform owner, would easily be compromised by aggregating the remote interactions associated with that platform). To prevent this, a layer of indirection is inserted into the relationship: an intermediary privacy certification authority (Privacy CA) – conceptually similar to an SSL certification body – can inspect the EK, but the third party making the identity confirmation request receives only confirmation from the certification authority that certain, non-identifying, aspects of the originator platform are genuine. (A later refinement, direct anonymous attestation, achieves a similar result without the platform having to reveal its identity even to a Privacy CA.)

The Privacy CA can verify the platform’s identity by confirming that the EK is genuine, by checking the platform’s unique TPM digital signature against a list of all TPM credentials, allowing a positive confirmation only if there is a match.

Evaluating the identity of a platform’s software configuration requires the creation of a chain of trust. If A → B → C → D is the chain of processes running on a computer platform, the object code of each of A to D must be measured and the measurements stored by the TPM. The chain of trust is initiated by A, called the Root of Trust for Measurement (RTM), which first measures B. If the stored value representing the last measurement of B matches the value A has just measured, A passes control to B. B then measures C, checks that the stored value matches the current value, and passes control to C, and so on, building up the full chain. In reality, of course, this chain can be very long. There are checks involved in this process to ensure the integrity of the measurements produced. These stored measurement values can be reported and contain significant information about the configuration of software executing on the platform (eg the hash of the object code for a specific version of a software application is identifiable because it is the same on any platform on which that version executes).
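The measure-then-pass-control pattern can be sketched as follows (a simplified illustration, not real TPM code: SHA-256 stands in for the TPM’s hash algorithm, and the components and ‘stored’ reference values are hypothetical):

```python
import hashlib

def measure(object_code: bytes) -> str:
    """Measure a component by hashing its object code."""
    return hashlib.sha256(object_code).hexdigest()

# Hypothetical components B and C, with stand-ins for their object code.
chain = [("B", b"object code of B"), ("C", b"object code of C")]

# Stand-in for the stored reference measurements held by the TPM.
expected = {name: measure(code) for name, code in chain}

def run_chain() -> bool:
    # A (the RTM) measures B before passing control; B measures C; and so on.
    for name, code in chain:
        if measure(code) != expected[name]:
            return False  # mismatch: do not pass control down the chain
    return True           # every link verified; the chain of trust holds
```

Any change to a component’s object code changes its hash, so a tampered link is detected before control is handed to it.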

Protected boot is a capability that extends the range of trusted protection on a computer platform from application level back to the first step in the boot process. While the chain of trust, if it commences at application level, will already be able to check the integrity of running software from the OS load onwards, by moving the RTM to the commencement of the boot process in BIOS, trusted capability is extended and, potentially, rootkit-type attacks are picked up and thwarted. Windows 10, for example, uses trusted platform technology for boot protection.

Measurement of entities forming the chain of trust, from boot processes to running applications, is invoked differently depending upon the type of chain utilised. With a static chain of trust, as used in the Windows 10 boot process, every element within the chain is measured upon a platform reset (ie a reboot). With a dynamic chain of trust, a platform reboot is not required: measurement can be triggered on demand. For an IoT device, which may boot once and then not reboot for years, facing long periods of non-measurement, a dynamic chain may provide a solution. In terms of implementation, Intel TXT is a tool that enables dynamic chain of trust technology.

Computer platforms can benefit from the trusted execution of hardware and software resources (ie by erecting a ‘fence’ of sorts around these resources and allowing access only in accordance with an appropriate platform policy). By doing so, the aim is to reduce the likelihood of an attack on one resource impacting other resources. In terms of hardware, the architecture of computer platforms can utilise memory ‘paging’ to isolate areas of computer memory in use by an application. In terms of software processes, Intel SGX is an example of a technology that can be employed to create encrypted enclaves in memory for an executing application. Intel SGX protection is robust because the memory-encryption keys are generated and held within the processor itself and are never exposed to software outside the enclave.

The TPM provides a protected storage location for important information such as platform identity and chain entity measurements. For input values, such as chain measurements, the TPM applies a cryptographic hash process, called an extend, to fold each new value into a Platform Configuration Register (PCR), protecting the integrity of the stored measurement sequence.
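The extend operation can be sketched like so (illustrative only: SHA-256 stands in for the TPM’s hash algorithm, and the measured components are hypothetical):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # New PCR value = hash(old PCR value || new measurement).
    # Because each result folds in the previous one, the final PCR
    # encodes the entire ordered sequence of measurements.
    return hashlib.sha256(pcr + measurement).digest()

pcr = b"\x00" * 32  # PCRs start zeroed at platform reset
for component in (b"boot loader", b"OS kernel", b"application"):
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())
```

A single fixed-size register thereby commits to an arbitrarily long chain: no individual measurement can later be removed, altered or reordered without changing the final PCR value.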

[Illustration: trusted computing]

Details of each measured event are then appended, within an event structure, to the Stored Measurement Log (SML). The SML is external to the TPM. It provides a resource that is accessible to a verifying entity, for example an entity within the trust chain that wishes to check the integrity of the next entity in the chain before making a trust decision to pass control to it, or a remote verifier.

How does the verifier know that the values in the SML represent the PCR values and not some ‘tampered’ chain measurements designed to trick the verifier into believing they are genuine? The verifier can validate the integrity by recomputing the expected result from the values within the SML: replaying every extend operation recorded in the SML, in order, must reproduce the single PCR value obtained from the TPM and, if it does, the log faithfully represents the measured chain.
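That replay check can be sketched as follows (again a simplification: SHA-256 stands in for the TPM’s hash, and the log entries are hypothetical):

```python
import hashlib

def extend(pcr: bytes, digest: bytes) -> bytes:
    return hashlib.sha256(pcr + digest).digest()

def verify_sml(sml, reported_pcr: bytes) -> bool:
    """Replay every digest recorded in the SML from the reset value;
    the result must equal the single PCR value reported by the TPM."""
    pcr = b"\x00" * 32
    for digest in sml:
        pcr = extend(pcr, digest)
    return pcr == reported_pcr

# Hypothetical log entries; the TPM would have performed the same
# extends internally to arrive at its reported PCR value.
sml = [hashlib.sha256(e).digest() for e in (b"boot loader", b"kernel", b"app")]
reported = b"\x00" * 32
for d in sml:
    reported = extend(reported, d)
```

A truncated, reordered or altered log produces a different replayed value, so it cannot be passed off as matching the TPM-reported PCR.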

The TPM has limited actual storage space, approximately 25K. Clearly, any other sizeable data on the platform cannot receive the shielding benefit of storage within the TPM directly. However, protected storage can still be provided for data outside the TPM, using the Root of Trust for Storage (RTS), key material held in non-volatile (persistent) TPM memory, in a way that ties that data to the platform.

The RTS key (under TPM 1.2 this is a 2048-bit RSA key) is permanently enclosed within the TPM and can be used, for example, to encrypt a storage key used outside the TPM to produce a Sealed Blob (the TPM typically encrypts keys that in turn encrypt data, rather than large chunks of data itself, which the TPM hardware could not handle efficiently).

This procedure uses asymmetric cryptography. Essentially, the storage key is loaded into the TPM, which decrypts it (an operation requiring the RTS), revealing the key; the TPM Seal command then combines the data and the authorisation policy, using the public key portion of the storage key revealed inside the TPM, together with a platform-specific proof value, in an HMAC calculation.

The resulting Sealed Blob is sealed to the key that performed the encryption and this platform-based (unique TPM) binding is particularly strong because of the proof used.
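A loose sketch of the binding idea (illustrative only: real TPM sealing also encrypts the data and uses the TPM’s own key hierarchy; here a hypothetical platform secret and Python’s hmac module stand in for the RTS-protected key and the proof):

```python
import hashlib
import hmac
import os

# Hypothetical stand-in for key material that never leaves the TPM.
PLATFORM_SECRET = os.urandom(32)

def seal(data: bytes, policy: bytes) -> dict:
    # Bind the data and its authorisation policy to this platform
    # with an HMAC 'proof', loosely mirroring the TPM Seal command.
    proof = hmac.new(PLATFORM_SECRET, policy + data, hashlib.sha256).digest()
    return {"data": data, "policy": policy, "proof": proof}

def unseal(blob: dict, policy: bytes):
    expected = hmac.new(PLATFORM_SECRET, policy + blob["data"],
                        hashlib.sha256).digest()
    if policy == blob["policy"] and hmac.compare_digest(expected, blob["proof"]):
        return blob["data"]
    return None  # wrong policy, or a blob sealed on a different platform
```

Because the proof depends on a secret unique to the platform, a blob carried to another machine (with a different secret) will fail the check, which is the essence of the binding described above.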

Trusted technology in practice

Trusted computing is not new. While there has been resistance to its ideas, uptake and industry traction have improved in the last few years, in large part due to the embedding of TPMs in millions of computer platforms shipped worldwide (though until recently the technology was frequently turned off by default). Additionally, trusted technology is utilised in popular Windows operating systems, Windows 8 and 10, to safeguard the boot process.

In terms of implementing privacy, one of the risks data controllers and processors need to mitigate is that of data breaches following employees leaving unencrypted notebooks unguarded or losing them for whatever reason. A significant penalty is likely to be imposed on a UK data controller by the ICO where encryption of data at rest is not applied in such cases. A neat solution using trusted computing is BitLocker, a Microsoft Windows application that uses TPM technology to implement drive encryption. If an attacker manages to procure a stolen laptop with BitLocker activated, the hard drive data will remain encrypted and inaccessible without the TPM-protected key.

Trusted computing may have particularly useful application in IoT and autonomous vehicle computing platforms. With the sophistication of malware increasing (think ‘Stuxnet’ worm), in addition to the large attack surface of such systems (which may be physically more accessible and more likely to be manipulated by an attacker), embedded software systems can no longer be assumed to live in isolation. Further, there are economic drivers to incorporate trusted technology into such systems: it reduces the cost of managing deployed systems by enabling safer network implementation, including a more seamless firmware or software updating capability.[1] Being able to readily measure the state of a computer platform, and identify when the state has strayed from that expected, will leave manufacturers and platform owners better equipped to manage cyberattacks on such systems.

Digital Rights Management (DRM) via trusted computing is of particular interest to digital media suppliers. The idea here is that the content owner is protected against malfeasance on the trusted platform by the platform owner. Using the remote attestation capability, a content owner may enforce conditional access on the content (eg no-replication, time-limited licence, platform-only access, etc) on the platform. However, this does not need to be ‘Hollywood’ content – it can include pictures, records or other files. While this may not be a panacea for DRM, trusted platforms are likely to provide much stronger rights management. There has been resistance in certain quarters to trusted computing as a consequence of the DRM possibilities, leading to the unfortunate renaming of trusted computing to ‘treacherous computing’ by Richard Stallman, founder of GNU (the free UNIX-like operating system). However, in a 2015 update to his initial post, Stallman concedes that the TPM without DRM is harmless: ‘…we conclude that the “Trusted Platform Modules” available for PCs are not dangerous, and there is no reason not to include one in a computer or support it in system software.’

There are data privacy implications in trusted computing which will need exploration if it is to become widely accepted. The process of remote attestation could lead to potential data protection breaches if not carefully managed. If a verifying party wants to receive evidence about a platform prior to making some form of trust-based decision about it, the idea is that the indirection produced by the introduction of a Privacy CA should mean that only pseudonymised data is sent onwards, with no identifying information. However, this is not trivial. In fact, through remote attestation the trusted platform can potentially report a wide range of information about the platform, which may no longer remain non-identifying if aggregated. This means that the platform user could potentially be identified even without the supply of any obvious personal data. It is unclear what safety mechanisms will be used to ensure the platform owner is provided with information, or is entitled to give or withdraw consent, in relation to any processing. There are also concerns that need to be addressed about what sort of decision-making powers the Privacy CA may have, such as blacklisting ‘bad’ platforms, and the extent to which it performs any tracking or profiling of verification requests and identity confirmations.

Trusted computing is a branch of computer security that has the potential to provide substantial ammunition in the cybersecurity armoury, but it will require careful privacy management. For its use to become more pervasive, concerns over uses of remote attestation (such as DRM) need to be addressed. David Grawrock, former Principal Engineer and Security Architect at Intel and former chair of the TCG, quotes Isaac Asimov (The Intel Safer Computing Initiative: Building Blocks for Trusted Computing, 2006): ‘Part of the inhumanity of the computer is that, once it is competently programmed and working smoothly, it is completely honest‘. This is precisely the goal of trusted computing.

Manish Kumar Soni is a Solicitor & Notary (CIPP/E) at ClaydenLaw (www.claydenlaw.co.uk) and Reader MSc Software & Security at the University of Oxford


[1] ‘Architect’s Guide: IoT Security’ (2015) and ‘Secure Embedded Platforms with Trusted Computing’ (2012), Trusted Computing Group