Client-side software is not a locked box; it’s open to reverse engineering. In the first article in this series, we looked at the dangers presented by in-app cryptography.
Thankfully, the problem isn’t new. Technologies exist specifically to mitigate the risks and allow in-app cryptography to be used safely. Let's look at the problem of distributing a secret key in your app, while trying to keep it secure from attackers.
The APIs that operating systems (OS) provide for managing cryptographic operations, and the functionality behind them, are often called “keystores”. This article will discuss keystores in more detail, as well as three technologies commonly used to implement secure in-app cryptography: secure elements, Trusted Execution Environments (TEEs), and white-box cryptography.
A keystore provides a container for an application to store and generate cryptographic keys and to use those keys in cryptographic operations.
The implementation will depend on the OS, and often on the hardware hosting it. This makes it difficult to know just how secure a keystore actually is. In some cases the keystore is backed by a secure element, in others by a TEE, while in many situations the implementation is entirely software, with little more than a SQLite database behind it.
Some keystore APIs let you query how the keystore is implemented, though in a world where software can be manipulated, we need to be careful about trusting OS calls for security-critical operations.
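As a concrete illustration, Android’s keystore exposes this kind of query through its KeyInfo class. The sketch below is illustrative only and assumes a secret key already stored under the hypothetical alias "my_key_alias"; other platforms expose (or omit) this information differently.

```kotlin
import android.os.Build
import android.security.keystore.KeyInfo
import android.security.keystore.KeyProperties
import java.security.KeyStore
import javax.crypto.SecretKey
import javax.crypto.SecretKeyFactory

fun describeKeyProtection(): String {
    // Load the platform keystore and fetch a handle to the stored key.
    val ks = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    val key = ks.getKey("my_key_alias", null) as SecretKey  // hypothetical alias

    // Ask the keystore how this key is actually protected.
    val factory = SecretKeyFactory.getInstance(key.algorithm, "AndroidKeyStore")
    val info = factory.getKeySpec(key, KeyInfo::class.java) as KeyInfo

    return if (Build.VERSION.SDK_INT >= 31) {
        when (info.securityLevel) {
            KeyProperties.SECURITY_LEVEL_STRONGBOX -> "secure element (StrongBox)"
            KeyProperties.SECURITY_LEVEL_TRUSTED_ENVIRONMENT -> "TEE"
            KeyProperties.SECURITY_LEVEL_SOFTWARE -> "software only"
            else -> "unknown"
        }
    } else {
        @Suppress("DEPRECATION")
        if (info.isInsideSecureHardware) "secure hardware" else "software only"
    }
}
```

Note that the answer is itself reported by software an attacker may have tampered with, which is exactly the caveat above.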
The real challenge with keystores is that they are positioned as the “trust controller”. This is really a necessity: any keystore is a third-party component from the app’s point of view.
This statement makes more sense when you look at how cryptographic keys are provisioned into a keystore. Usually you are given only two options: either the keystore generates the key for you, or you load a wrapped key into it. (A wrapped key is simply a cryptographic key that has itself been encrypted.) And since the keystore is the entity that generates the wrapping key, in both cases it is the keystore that dictates how keys are generated and provisioned.
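To make the first option concrete, here is a hedged sketch of asking the Android keystore to generate an AES key; the alias and parameters are illustrative, not prescriptive. The returned object is only a handle: the raw key bytes are never extractable by the app. (The second option, importing a wrapped key, goes through a similar but more involved API.)

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

fun generateLocalKey(): SecretKey {
    // Ask the keystore to generate the key; the key material never leaves it.
    val keyGen = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
    keyGen.init(
        KeyGenParameterSpec.Builder(
            "my_key_alias",  // hypothetical alias
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
            .setKeySize(256)
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .build())
    return keyGen.generateKey()  // a handle, not extractable key material
}
```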
Because the keystore controls generation and provisioning, it is a useful aid for cryptographic operations that live entirely on the device, such as encrypting local storage and device-binding algorithms.
Keystores are more cumbersome when interacting with external services. Using a known, pre-agreed secret key is hard, because the keystore’s provisioning model imposes its own dependencies on the external service’s implementation.
With keystores we also have to be aware that the entry and exit point of each cryptographic operation is the calling software. That means the data we encrypt or decrypt passes through the application in the clear.
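You can see this in the shape of any keystore API: plaintext goes in and comes out through ordinary app memory. A minimal sketch, reusing the hypothetical alias from above:

```kotlin
import java.security.KeyStore
import javax.crypto.Cipher
import javax.crypto.SecretKey

fun encryptLocally(plaintext: ByteArray): Pair<ByteArray, ByteArray> {
    val ks = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    val key = ks.getKey("my_key_alias", null) as SecretKey  // hypothetical alias

    // The key stays inside the keystore, but `plaintext` is an ordinary byte
    // array in app memory, visible to anyone inspecting the process.
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key)
    return Pair(cipher.iv, cipher.doFinal(plaintext))  // keep the IV with the ciphertext
}
```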
Therefore, to get the flexibility and security required, we may need direct access to the underlying security technology, rather than being limited to the OS-provided keystore API. As we’ll see below, the nature of this technology means that is not always possible.
The historic gold standard for protecting code and data running in hostile environments is the secure element. As such, it makes sense for this to be the first technology we look at to protect in-app cryptography.
A secure element is a tamper-resistant mini-computer, similar to the chip in your credit card. If you are lucky enough to have access to one, it enables you to install small programs called applets and safely execute that code, knowing that no-one can access it or its data.
Many consumer devices these days come with a secure element embedded in the hardware. For example, by 2019 a third of smartphones had a secure element, and that number is growing.
The secure element owner, typically the device manufacturer, controls its provisioning by requiring that all software installed into the secure element be cryptographically signed. Only the secure element owner has access to the carefully guarded signing keys. This strict control is necessary to maintain the security of the secure element.
Therefore, if you are the secure element owner, then you have access to a highly secure computing environment. This is a great asset when deploying security critical applications.
But if you aren’t the secure element owner, then what do you do?
Another option is Trusted Execution Environments, which are isolated areas in the main processor. These allow secure programs to execute without their instructions or data being visible to the general software running on the machine.
Superficially, TEEs and secure elements can appear very similar: both provide a secure processing environment where access is controlled by the owner. But whereas a secure element is a separate tamper-resistant chip, a TEE is part of the main processor. This makes it less costly to deploy, while giving trusted programs a more powerful computational environment to work with. The tradeoff is that the security isolation is weaker.
This tradeoff makes a lot of sense for use cases like video DRM (Digital Rights Management). A secure element simply doesn’t have the computational power to handle video streams, while protecting the decryption keys is crucial to the success of any DRM implementation.
As with secure elements, access to a TEE is restricted to applications trusted by the TEE owner. In the DRM example, only “native” DRMs (those provided by the operating system) use TEEs.
If you aren’t the TEE owner, then what do you do?
The third technology involves doing everything within your own software.
White-box cryptography technology (“whitebox” for short) provides a means to run cryptographic (and other) algorithms securely in a pure software environment. It achieves this by combining code and data, including the crypto key, using a range of computational and mathematical techniques, so that the operation is no longer understandable to an attacker and the cryptographic key cannot be extracted. Thanks to these techniques, no portion of the key is ever present in memory or CPU registers in its original form.
Even when an attacker is using traditional static and dynamic analysis, the operation and data remain protected.
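To give a flavor of how this works, the toy sketch below folds a one-byte key into a lookup table at build time, so the key byte never ships with the app or appears at runtime. This is purely illustrative: a real white-box chains thousands of such tables and wraps each one in randomized internal encodings, whereas a single bare table like this is trivially inverted.

```kotlin
// Toy stand-in for a cipher's S-box (any fixed byte permutation will do).
val SBOX = IntArray(256) { (it * 7 + 3) and 0xFF }

// Build time, run by the developer: the key byte is folded into the table,
// so the key itself never ships with the app.
fun buildTable(keyByte: Int): IntArray =
    IntArray(256) { x -> SBOX[x xor keyByte] }

// Runtime, inside the app: a single lookup performs "XOR with key, then
// substitute" without the key byte ever existing in memory or registers.
fun protectedStep(table: IntArray, input: Int): Int = table[input and 0xFF]
```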
Because it is a pure software approach, it is very easy to deploy and very flexible in terms of the algorithms you can run; you can even chain multiple algorithms together so that complex operations are performed without ever exposing data in the clear.
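Chaining works on the same principle: two operations can be fused into one table at build time, so the intermediate value never exists in the clear. Another toy sketch, this time fusing “decrypt under k1” with “re-encrypt under k2”:

```kotlin
// Build time: compose the two steps into a single table. At runtime the
// intermediate plaintext byte (c xor k1) is never materialized on its own.
fun buildReEncryptTable(k1: Int, k2: Int): IntArray =
    IntArray(256) { c -> (c xor k1) xor k2 }

// Runtime: one lookup maps a byte encrypted under k1 straight to the
// corresponding byte encrypted under k2.
fun reEncrypt(table: IntArray, cipherByte: Int): Int = table[cipherByte and 0xFF]
```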
The security of mobile payments rests on protecting the in-app cryptography.
The industry realized very quickly during the early days of mobile payments that, unless you were Apple, Google, or Samsung, achieving scale with a secure-element approach was hard. While the payments industry was familiar with hardware-backed security, there was no universal way to access a secure element in a phone. The solution was to switch from the hardware-backed security the industry was used to, and instead use an architecture underpinned by white-box cryptography to get equivalent security for the payment credentials. This satisfied the security requirements of the financial services industry, and it meant solutions could be deployed at scale to end users.
The flexibility offered was very valuable. The crypto-architectures could easily fit in with the rest of the app, and because the developers were in full control of the crypto, they could design flows in such a way that sensitive data was never exposed in the clear, even when someone else had already designed the server architecture and protocols.
Even someone dynamically analyzing the app would not see the protected data.
These same lessons can be applied in other industries, which can take advantage of the road paved by mobile payments, an industry that understands how to secure sensitive data out in the wild. White-box cryptography is already trusted there, so you can trust it too.
PACE’s technology is all about securing the software IP of our customers.
That includes providing software licensing services—meaning only authorized users are able to use a protected software application or plug-in.
A key part of building secure licensing solutions is protecting PACE’s own algorithms and IP. Doing so requires software security well beyond traditional obfuscation, as well as flexibility, so that the security doesn’t impact the usability of PACE’s products or those of its customers. To achieve this, PACE developed its own white-box technology.
This technology has been productized into a 3rd-generation white-box called White-Box Works. This is a software development tool that PACE’s customers use to protect their high-value algorithms and data wherever their software is running.
Traditional white-box library products are monolithic and inflexible DLLs. This means they force developers to compromise on their creativity, while leaving them reliant on the vendor for updates. In contrast, 3rd-generation white-boxes like White-Box Works empower the developer to simultaneously serve multiple platforms, while tailoring their software architecture to match their use-case. This ultimately allows them to deliver far more value to the end user.
To learn more about protecting in-app cryptography, contact PACE to talk with our experts.