Cryptography, and asymmetric (a.k.a. public key) cryptography in particular, still radiates an aura of mystery and confusion. Public key cryptography is far from bleeding-edge though, as it was discovered in 1970 by James Ellis, a British cryptographer working for GCHQ. In typical British fashion, the discovery was kept secret from the public and shared only with the NSA. However, thanks to the independent efforts of American researchers including Whitfield Diffie, Martin Hellman and the RSA trio, a general-purpose asymmetric cryptosystem was published as early as the late 1970s.
The next 40 years have brought faster algorithms and resistance to side-channel attacks to the table, but I feel that by far the biggest change has been the mass adoption of TLS, and with it asymmetric cryptography, on the Internet. But now we’re getting a bit ahead of ourselves; let’s first take a look at how both symmetric and asymmetric crypto work.
Symmetric vs asymmetric encryption
A symmetric cryptosystem uses one key for both encryption and decryption of the data. Somewhat confusingly, this key is called the private key. Any device that possesses this private key can read any data sent and impersonate other parties. Its advantages include low CPU load, easy offloading, and the fact that the key is never sent with the data. Therefore, barring weaknesses in the algorithm used or implementation mistakes, the chance of the data being decrypted by a passive observer is zero.
The downsides include the lack of non-repudiable digital signatures (it is always possible to claim that somebody else sharing the private key sent the message) and the key transportation problem: anyone who can eavesdrop on the private key in transit can compromise all past and future communications between the parties that share it.
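To make this concrete, here is a minimal sketch of a symmetric cryptosystem using the Fernet recipe from Python’s cryptography library; the payload is, of course, just an illustration:

```python
from cryptography.fernet import Fernet

# Both parties must somehow obtain this same key beforehand --
# this is exactly the key transportation problem described above.
shared_key = Fernet.generate_key()

# The sender encrypts with the shared key...
ciphertext = Fernet(shared_key).encrypt(b"meter reading: 42.7 kWh")

# ...and the receiver decrypts with the very same key.
plaintext = Fernet(shared_key).decrypt(ciphertext)
assert plaintext == b"meter reading: 42.7 kWh"
```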
In contrast to symmetric encryption, where only a single key is used, asymmetric encryption works with key pairs. A key pair consists of two mathematically related keys: a private one and a public one. The confidentiality of an asymmetric cryptosystem is not compromised if a third party gets hold of any public keys; as a matter of fact, in many use cases it is advantageous to deliberately spread them to as large an audience as possible. The private keys are just as critical to keep secret as in symmetric cryptosystems, but with one important difference: in asymmetric cryptosystems, there is never a need to transfer a private key to another party. This transforms the key transportation problem into a key management problem.
Asymmetric cryptosystems, often called Public Key Infrastructures or PKIs, have one more important advantage that symmetric keys and shared secrets, such as our social security numbers, lack: it is possible to verify that a party possesses the private key corresponding to a known public key, without divulging the private key to anybody else. Their main disadvantage is increased computing load: asymmetric cryptography is orders of magnitude slower than symmetric cryptography. Therefore, in real life asymmetric cryptography is often used to securely exchange (permanent or temporary) symmetric keys between parties.
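A minimal sketch of that hybrid pattern, again using Python’s cryptography library; the payload is illustrative, and in practice the recipient’s public key would arrive out of band rather than being generated in the same script:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Recipient's key pair; normally only the public half would be shared.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: encrypt the bulk data with a fast symmetric key, then wrap
# the (small) symmetric key with the recipient's public key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"a large payload " * 1000)
wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the symmetric key, then decrypt the bulk data.
unwrapped_key = recipient_key.decrypt(wrapped_key, oaep)
payload = Fernet(unwrapped_key).decrypt(ciphertext)
```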
PKIs can often be confusing. In Figure 2, both Bob on the left and Alice on the right have generated their key pairs. The next step is exchanging their corresponding public keys.
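In code, the generation and exchange step might look like the following sketch; each party runs the same snippet locally, and only the serialised public half is ever sent to the other party:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Each party generates a key pair locally; the private half never leaves.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Only the public half is serialised and exchanged.
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode())
```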
Sending an encrypted message requires having the recipient’s public key. Since the public key can be freely shared between any number of people/devices, this doesn’t present the same kind of key sharing problem as a shared private key would. In Figure 3, Bob encrypts a message using Alice’s public key. After Alice receives the encrypted message, she and she alone can decrypt it using her secret private key.
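A minimal sketch of the Figure 3 scenario; Alice’s key pair is generated inline purely for brevity:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Bob encrypts with Alice's public key...
message = alice_key.public_key().encrypt(b"Hi Alice!", oaep)

# ...and only Alice's private key can decrypt it.
assert alice_key.decrypt(message, oaep) == b"Hi Alice!"
```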
Asymmetric cryptography can be used to digitally sign a message, offering authentication in addition to confidentiality. In Figure 4, Bob digitally signs a document using his secret private key. After receiving the message, Alice can verify the signature with Bob’s public key. This ensures that the message hasn’t been modified en route, and in addition Bob can’t later deny sending that exact message.
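And the Figure 4 scenario as a sketch; note that verify() raises an exception rather than returning False, which makes tampering hard to ignore silently:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
document = b"I, Bob, agree to the terms."

# Bob signs with his private key...
signature = bob_key.sign(document, pss, hashes.SHA256())

# ...and anyone holding Bob's public key can verify the signature.
# verify() raises InvalidSignature if the document was tampered with.
bob_key.public_key().verify(signature, document, pss, hashes.SHA256())
```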
The above two operations, encryption and digital signing, can be and commonly are combined. As an example, Bob would first encrypt a message using Alice’s public key and then sign the encrypted message using his private key. After receiving the signed and encrypted message, Alice first verifies the signature using Bob’s public key and then decrypts the message using her private key.
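Putting the two together, a sketch of the encrypt-then-sign flow described above:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Bob: encrypt for Alice, then sign the ciphertext with his own key.
ciphertext = alice_key.public_key().encrypt(b"top secret", oaep)
signature = bob_key.sign(ciphertext, pss, hashes.SHA256())

# Alice: verify Bob's signature first, then decrypt.
bob_key.public_key().verify(signature, ciphertext, pss, hashes.SHA256())
plaintext = alice_key.decrypt(ciphertext, oaep)
```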
Certificate pinning and other considerations for devices
If the hardware resources permit, PKI-based identities should generally be preferred over shared secrets as device identities. These device identities should hopefully be combined with pinned (i.e. hard-coded) server certificates.
I use the word “hopefully” with heavy emphasis here, since there seems to be a disproportionate fear of exceedingly rare but potentially catastrophic events, even at the expense of ignoring common and well-known risk factors. It is shocking how many mobile banking apps are trivially susceptible to man-in-the-middle (MITM) attacks, because not even the banks saw fit to use pinned certificates. There is no limit on how many certificates an app can pin.
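For illustration, a minimal sketch of fingerprint-based certificate pinning using only Python’s standard library; the hostname and the pinned fingerprint are placeholders you would replace with your backend’s real values:

```python
import hashlib
import socket
import ssl

HOST = "api.example.com"  # hypothetical backend
# SHA-256 fingerprint of the expected server certificate (placeholder value).
PINNED_FP = "0000000000000000000000000000000000000000000000000000000000000000"

context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        if fingerprint != PINNED_FP:
            # Possible MITM: the presented certificate is not the pinned one.
            raise ssl.SSLError("certificate pin mismatch")
```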
Scaling up certificate pinning from individual apps to the whole Internet gave birth to a standard called HTTP Public Key Pinning, or HPKP for short. HPKP is simply a fantastic thing. Finally, as a service provider I can protect my users against rogue or coerced Certificate Authorities (CAs), and together with HTTP Strict Transport Security (HSTS) I can build a solid foundation to secure my users’ privacy. Except that on Firefox, HPKP is crippled by default for the sole purpose of allowing man-in-the-middle attacks by your employer or anyone else who has had access to your device for a minute*1. Chrome currently supports it but has publicly announced its intention to deprecate it in May 2018. Safari, Internet Explorer and Edge up the ante by not even claiming to support HPKP.
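For reference, an HPKP pin is the base64-encoded SHA-256 hash of the certificate’s SubjectPublicKeyInfo, not of the whole certificate, and it can be computed with a few lines of Python (the certificate path is illustrative):

```python
import base64
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Load the server certificate (path is illustrative).
with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Hash the DER-encoded SubjectPublicKeyInfo, then base64-encode it.
spki = cert.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()

# A real HPKP deployment must also include at least one backup pin.
print(f'Public-Key-Pins: pin-sha256="{pin}"; max-age=5184000')
```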
It took a while to swallow that it is late 2017 and I can’t properly secure my own home automation web interface without writing an app for it. Back to a VPN-based solution then, but sometimes I truly wish that software vendors were liable for the security of their products just as vendors of physical goods are. The GDPR will actually address some of these concerns starting from May 2018. However, the way things are today, I’d better just follow my own advice: when setting out on the path to a PKI-based solution, no matter whether it’s for securing the service end or the device accessing it, do not despair. Concentrate on the most common use cases first, instead of the pathological ones.
The classical approach is to utilise various PKI schemes to fill the gaps left by symmetric keys: providing either a privately or a publicly trusted root, and utilising a key pair, consisting of a public key and a private key, instead of a single shared secret. This effectively transforms the transport security problem (how to protect the shared secret in transit) into a key management problem: how to get the private key securely into the device in the first place.
If device cloning is not a concern, the key pairs could simply be created by the device on first use/first connect and the public key transferred over a secure transport (TLS/VPN) to the backend servers for signing (please remember to pin the backend certificates, or at least restrict the accepted CAs!). The device serial number(s) can be sent alongside the public key or embedded in the certificate signing request in order to match a key pair to an immutable device serial number.
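As a sketch of that enrolment step, assuming Python’s cryptography library; the serial number and subject names are purely illustrative:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Generated on the device on first boot; the private key never leaves it.
device_key = ec.generate_private_key(ec.SECP256R1())

# Embed the immutable device serial number in the CSR subject so the
# backend can bind the certificate to the device (values are illustrative).
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.SERIAL_NUMBER, "SN-0012345678"),
        x509.NameAttribute(NameOID.COMMON_NAME, "device-0012345678"),
    ]))
    .sign(device_key, hashes.SHA256())
)

# Sent to the backend over a pinned TLS connection for signing.
csr_pem = csr.public_bytes(serialization.Encoding.PEM)
```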
As always, this straightforward way has its advantages and disadvantages. PKI offers cryptographically strong identities, created without any end user interaction. The resulting certificates do not contain any personally identifiable information (PII) unless specifically embedded there, and therefore are much simpler to handle once the GDPR is in force.
The downsides include the many potential pitfalls in creating and managing a public key infrastructure, and a problem shared with symmetric keys: how to store the private key securely.
Public vs private roots
One important decision to make is whether to use a publicly trusted root or a private root. Publicly trusted roots are simply those root certificates that are included by default in various trust stores, such as Windows’ key store, the ca-certificates package on Linux, and some browsers (Mozi… I mean Moz://a uses its own key store).
If leaning towards publicly trusted certificates, one thing that has to be carefully considered is that the Baseline Requirements, which all publicly trusted CAs have to follow, set strict limits on certificate validity periods, and the trend is moving from a couple of years towards months. While I think this is the right direction for the Web PKI, it presents serious challenges for IoT use cases. Many commonly used tools and libraries correctly refuse to work with expired certificates, and one has to carefully plan and test what happens if a device tries to connect to the service with an expired identity. The behaviour might change with new software revisions with little warning!
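One cheap mitigation is to have the device check its own identity’s expiry proactively instead of discovering the problem at connection time; a sketch, where the certificate path and the renewal window are illustrative:

```python
from datetime import datetime, timedelta
from cryptography import x509

# The device's own identity certificate (path is illustrative).
with open("device-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Renew well before expiry rather than finding out when a TLS
# library refuses to connect with an expired identity.
remaining = cert.not_valid_after - datetime.utcnow()
if remaining < timedelta(days=30):
    print(f"Device identity expires in {remaining.days} days; renew now.")
```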
Private roots offer the freedom to customise the entire PKI for the use case(s) at hand. The other side of the coin is that maintaining a secure PKI is not for the faint of heart. Therefore, it is not surprising that many commercial CAs offer hosted private PKIs as a service, and these offerings are worth considering.
Storing the symmetric or private key securely boils down to two questions:
- Can the keys be embedded securely during manufacturing?
- How much extra cost does the threat model warrant?
These two are very important points. For example, when designing the original Xbox, Microsoft answered “1. Yes. 2. Not much.”.
In order to save costs, the key was not buried inside a custom CPU, but transmitted over a bus from an external chip. Nevertheless, the design was deemed “unhackable for a decade”, since there wasn’t any way to sniff the keys from its very high-speed HyperTransport bus. Critically, here “any way” actually meant “any way to do it relatively cheaply using commercial off-the-shelf hardware”. It took one brilliant PhD student, Andrew Huang, a few months with a well-equipped lab to eavesdrop on the key, and the whole system was soon broken. Keep Moore’s Law in mind, no matter how dead you consider it to be.
TPM modules and PUFs
If the keys can’t be securely embedded into the system during manufacturing, there are two more secure options left when compared to placing the key on flash/EEPROM, the digital version of placing a post-it note under your keyboard. The first is to utilise a TPM module. TPM modules are basically self-contained secure microcontrollers that are specialised in cryptographic operations, running a custom OS. They interface with the main operating system only via a limited interface. Their strength is that they can generate the key pair internally, never exposing the private key even to the main OS. In addition, common cryptographic operations can be offloaded to the TPM module, which might be an important feature for resource-constrained devices, since public key cryptography can be more mathematically intensive and the key sizes much larger than symmetrical keys with corresponding security levels. The Achilles’ heel of TPM modules is two-fold: the additional BOM cost and very spotty software support. Support for standard libraries like OpenSSL is very flaky, and one might have to spend considerable effort to keep a custom fork up to date.
The other option is to go for PUFs, Physically Unclonable Functions. They might sound a little like science fiction, but they have been well explored in academia since the 80s and commercialised in the last decade or so. PUFs rely on minute manufacturing variations in integrated circuits, such as SRAM and DRAM memory chips. These variations give each device a unique built-in ‘fingerprint’ that can later be used as a seed to generate unique keys for each device. The name can be misleading, however, as some PUFs are actually cloneable, and recent research points out successful attacks against them. The other concern is that they are both immutable and hardware-based, and therefore, should any weakness be found, the only option is to replace the hardware (see Estonia’s national ID cards).
As a reward for reading through this mammoth post, one final practical tip.
Don’t be tempted roll your own crypto or IAM. Both may look simple at first, but so does climbing to the Mount Everest – just walk uphill until you’ll reach the summit. If you still feel you can do it, at least check first how much the NSA would pay a world-class cryptographer such as yourself =)
Contact Us to learn how IAM can help secure your IoT solution or environment.
—
*1 In order to mitigate this vulnerability, change the Firefox preference security.cert_pinning.enforcement_level from its default value of 1 to 2.