Quantum Computers form a very real threat to online security. Most of
today’s assumed-safe mechanisms are suddenly outdated. This is how the
InternetWide Architecture tackles these threats.
To quickly recap the threats that Quantum Computing poses:
Secure authentication may be impossible in 10 years;
Digital signatures on digital content may not be trustworthy in 10 years;
Things we encrypt today may be decrypted in 10 years from now.
The reason we say “may” is just that it may take 15 years instead of 10 before Quantum Computers hit us; but the funding is there, and the work is no longer fundamental research but rather a phase of technology being refined.
Take special note of the third point; this impacts our current and past actions and indicates that we should get moving as soon as possible, not in 10 years from now.
One major concern that hits everyone, if we look at today's practices, is that any password and any credit card number that we send to a "secure" web server will become readable under the regime of quantum computers. Both practices, the "authorisation to enter" and the "license to withdraw", are cryptographically poor, as they are founded on fixed "secret" words, and their technology has expired; they may still be practical in serving the masses, but they are lingering traces of a naive past and need to be replaced.
What Crypto Algorithms are hurting?
Most cryptographic algorithms that we use all the time will be carried to the cryptographic graveyard: RSA, DSA, Diffie-Hellman. Not just the modular-exponentiation forms, but also the elliptic-curve varieties. Algorithms have been devised that, given a quantum computer to run them on, crack the fundamental assumptions underlying these schemes.
The algorithms are not poorly designed; they have just been designed for the “normal” approach to computing. Quantum Computers change the game completely, by effortlessly performing a very large number of computations at the same time, weeding out impossible solutions as they go. This is not unlike a string on a guitar, which produces a chaotic mix of vibrations when it is plucked, but quickly settles on just those wavelengths that fit the string.
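To make the underlying assumption concrete, here is a toy RSA example (all numbers are tiny, hypothetical values for demonstration only, nowhere near real key sizes). Its security rests entirely on the hardness of factoring the public modulus; Shor's algorithm factors efficiently on a quantum computer, and as the last lines show, the factors hand an attacker the private key:

```python
# Toy RSA sketch -- NOT real cryptography. Illustrates that factoring
# the public modulus n recovers the private key, which is exactly what
# Shor's algorithm achieves on a quantum computer.

def egcd(a, b):
    # Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g.
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    # Modular inverse of a modulo m (assumes gcd(a, m) == 1).
    g, x, _ = egcd(a, m)
    return x % m

p, q = 61, 53                      # secret primes (Shor's algorithm finds these)
n = p * q                          # public modulus
e = 17                             # public exponent
d = modinv(e, (p - 1) * (q - 1))   # private exponent, derived from p and q

msg = 42
cipher = pow(msg, e, n)            # anyone can encrypt with the public (n, e)
assert pow(cipher, d, n) == msg    # only the holder of d can decrypt

# Given only n, factoring it (trivial at this toy size, infeasible
# classically at real sizes) rebuilds the private key:
p2 = next(f for f in range(2, n) if n % f == 0)
q2 = n // p2
d2 = modinv(e, (p2 - 1) * (q2 - 1))
assert pow(cipher, d2, n) == msg   # the "quantum" attacker reads the message
```

The same collapse applies, with a different quantum subroutine, to the discrete logarithms that DSA, Diffie-Hellman, and their elliptic-curve forms rely on.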
What Crypto Algorithms will replace them?
Several other algorithms are believed to resist quantum computers. They are less efficient than the popular ones, however, or they lack properties that we desire.
For one, message digests and symmetric encryption are thought to remain secure. They use the same key on both ends, however, so they lack the property of public-key crypto, where anyone can learn a public key while only one person holds the corresponding private key. These are perfect algorithms where they can be used, but use cases that cross realm boundaries (such as opening a secure website) are not covered yet.
Secondly, there are (sometimes very old) algorithms based on lattices or other primitives that avoid modular exponentiation. These sometimes require long keys or long message formats, or they depend on stored state, which has made them less popular than the algorithms now under attack. But these are at least public-key algorithms.
Intermediate forms also exist; public-key signatures based on hashes, for instance. Some of these can be small, but they may need storage space to keep rotating their keys. That can be impractical, for instance when the key is used in different locations.
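The simplest hash-based scheme, the Lamport one-time signature, shows both trade-offs in miniature: keys and signatures are large (hundreds of hash values), and the scheme is stateful, since a key pair must never sign a second message. This is a minimal sketch, not a production implementation:

```python
# Minimal Lamport one-time signature over SHA-256.
# Illustrates the costs of hash-based signing: big keys, big signatures,
# and state -- each key pair may sign exactly ONE message, ever.
import hashlib
import os

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Private key: 2 rows of 256 random secrets; public key: their hashes.
sk = [[os.urandom(32) for _ in range(256)] for _ in range(2)]
pk = [[H(x) for x in row] for row in sk]

def bits(msg: bytes):
    # The 256 bits of the message digest, least significant first.
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(256)]

def sign(msg: bytes):
    # Reveal one secret per digest bit -- this consumes the key.
    return [sk[b][i] for i, b in enumerate(bits(msg))]

def verify(msg: bytes, sig) -> bool:
    # Each revealed secret must hash to the matching public-key entry.
    return all(H(s) == pk[b][i]
               for (i, b), s in zip(enumerate(bits(msg)), sig))

sig = sign(b"hello")
assert verify(b"hello", sig)
assert not verify(b"tampered", sig)
```

Practical hash-based schemes (Merkle trees over many one-time keys) reduce the public key to a single hash, but the need to track which one-time keys are spent is exactly the stored state the paragraph above refers to.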
What we have not seen yet is a Quantum-Computer-proof replacement for Diffie-Hellman or its Elliptic-Curve counterpart. These algorithms are important in current-day cryptography because they give us Perfect Forward Secrecy, a property that isolates the keys used in different sessions.
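The property at stake can be sketched with a toy classical Diffie-Hellman exchange (the Mersenne prime below is far too small for real use, and real deployments use vetted groups or elliptic curves). Because each session draws fresh ephemeral secrets and discards them, compromising one session's key reveals nothing about any other; this session isolation is what a post-quantum replacement must preserve:

```python
# Toy finite-field Diffie-Hellman with fresh ephemeral secrets per
# session -- the mechanism behind Perfect Forward Secrecy.  Parameters
# are illustrative only; Shor's algorithm breaks the real-sized version.
import secrets

p = 2**127 - 1   # a Mersenne prime, fine for a toy demo only
g = 3

def session():
    # Both peers pick a fresh ephemeral secret for THIS session only.
    a = secrets.randbelow(p - 2) + 1
    b = secrets.randbelow(p - 2) + 1
    A, B = pow(g, a, p), pow(g, b, p)   # public values, sent in the clear
    # Each side combines its own secret with the peer's public value.
    return pow(B, a, p), pow(A, b, p)

k1_alice, k1_bob = session()
k2_alice, k2_bob = session()
assert k1_alice == k1_bob and k2_alice == k2_bob  # each session agrees
assert k1_alice != k2_alice                       # sessions are isolated
```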
Work is being done in cryptographic circles and standards bodies to select algorithms that should replace the old ones. These bodies are well aware of the danger to current-day encryption.
TLS to Encrypt the Internet
Most protocols use TLS as their basic encryption mechanism. It is safe to assume that this protocol will be among the first to be updated. This protocol secures websites, email and many other protocols, so at least the basic use cases will be remedied at some point.
As indicated, the most pressing concern is encryption. This may not be clear to all service providers, and some will wait with upgrades until Quantum Computers arrive. Those providers will be too late; you are warned to leave them in favour of those that do upgrade quickly. Such transitions may be forced somewhat by upgrades in client software that invalidate such lagging services.
SASL for Authentication
Authentication is a second-level concern, but it is still a concern. Most protocols use SASL authentication which, if implemented well, is a pluggable interface to a plethora of authentication options. Most SASL mechanisms are hash-based, including the strongest ones today (the SCRAM-*-PLUS mechanisms).
The web is the only protocol whose community considers itself too clever to be standardised. What makes it simple to develop on, namely its lack of semantics, is also what makes it difficult to do well; that is, if the web is to remain open.
The lack of semantics has led to a trend of vendors providing both client and server side of the connection, also known as “complete control” and generally not in the user’s interest. Think of “download our App now” for mobile platforms and “use our online application” for desktop usage. In general, it is a lock-in paradigm. Nice for providers, not so nice for users with the exception of superficial aspects such as ease-of-use.
Unfortunately, these lock-in paradigms are the only ones capable of change these days. There is absolutely no tendency in the web world to migrate towards an integral solution for proper authentication or other forms of security. Running over TLS does some good, but certainly not enough. Passwords embedded into websites have always been a bad idea, and they will probably be among the first aspects being cracked.
The web has us worried. It may die under Quantum Computers.
InternetWide Vision on Kerberos
We use Kerberos as the foundation for our security. This is for various reasons.
Kerberos is well-integrated and easy to use. Once a day, each user logs on, and for the remainder of the day they can access anything that they are eligible for. Users will not notice Kerberos during their day of work.
Kerberos is secure against Quantum Computers, simply as a result of its choice of algorithms, which are symmetric.
Kerberos is very efficient. Not just because symmetric algorithms are much faster than public-key credentials, but also because there are caches at clever places, allowing the reuse of gained credentials for the duration of a (day-long) session.
We have found mechanisms that enable the use of Kerberos across realms, even to a level that can connect the Internet as a whole. There is sufficient room for pseudonymity in our identity model that can be mapped onto Kerberos as well, so as to control one’s privacy when hitting upon remote services.
InternetWide Vision on TLS
We believe TLS should be as light as can be, so it is easy to use everywhere. Security is great when it is automatic and unnoticed. Our strategy towards TLS comes in a few choices:
We prefer to isolate TLS aspects in a daemon that is isolated from the actual clients or services using them. This is what inspired us to develop a TLS Pool that can be updated without turning to a plethora of applications.
We prefer to fill a TLS Pool from a central infrastructure, which can be upgraded to new crypto algorithms in one quick swoop. This is our IdentityHub approach.
We aim to add options to support Quantum Computing in the TLS Pool, and aim to switch the defaults from off to on as soon as this is possible with existing software. We defined three levels at which this could be done (authentication, encryption, handshake privacy) to allow us to do this with maximum control, rather than all at once. We shall add options to the validation expressions to allow more dynamic checks of this nature on a TLS connection that has already been setup, so administrators can exercise fine-grained control over their peers.
We prefer to provide any keys to users from a managed/central location. This underpins our Remote PKCS #11 mechanism, with a key repository hosted under personal control.
We have developed a cipher suite for TLS that caught us by surprise as being protected from Quantum Computers! The reason is that we integrate Diffie-Hellman with Kerberos, and we hash the session key from the Kerberos ticket into the key agreement with Diffie-Hellman.
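The key-mixing idea can be sketched in a few lines (function and variable names below are our own illustration, not the actual TLS-KDH derivation). Because the master secret depends on both the Diffie-Hellman output and the symmetric Kerberos session key, an attacker must break both; even if a quantum computer recovers the DH part, the symmetric Kerberos key keeps the combined key out of reach:

```python
# Sketch of mixing a Kerberos session key into a Diffie-Hellman key
# agreement, the idea behind TLS-KDH.  Names and the exact derivation
# are illustrative assumptions, not the real protocol's key schedule.
import hashlib
import hmac
import os

dh_shared_secret = os.urandom(32)   # stand-in for the (EC)DH output
krb_session_key = os.urandom(32)    # stand-in for the Kerberos ticket key

def combined_master_secret(dh: bytes, krb: bytes) -> bytes:
    # HKDF-extract-like step: keyed hash of the DH secret under the
    # Kerberos session key.  Both inputs influence every output bit.
    return hmac.new(krb, dh, hashlib.sha256).digest()

master = combined_master_secret(dh_shared_secret, krb_session_key)

# Knowing only the DH secret (the quantum attacker's position) does not
# determine the master secret:
assert master != combined_master_secret(dh_shared_secret, os.urandom(32))
```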
We are working on a mechanism named KXOVER that allows Kerberos tickets to be agreed between independent realms. As recently decided upon, this will run over TLS and thus depend on to-be-selected Post Quantum algorithms. These algorithms can demand very large messages, as far as we are concerned; the exchanges are going to be rare and will only take place once every few weeks for a pair of realms that wish to be connected. Individual client-to-service connections will benefit from the much higher speeds of Kerberos exchanges, when they pass through TLS-KDH. Crypto does not have to hurt!
InternetWide Vision on SASL
We think of SASL as a vital mechanism for today’s authentication. It is virtually everywhere, and can be easily expanded.
SASL should be used whenever possible. One way we like it is with GSS-API running Kerberos, as a mechanism to integrate with our security foundation of Kerberos. Alternatively, SASL EXTERNAL can be used to look at the protocol context, which usually means TLS. A client certificate (possibly from a managed repository over Remote PKCS #11) or Kerberos ticket (passed over TLS-KDH with or without pseudonymity) can be used quickly at this point.
Modern SASL mechanisms allow an authenticated user to step down to an authorisation identity. That is, instead of using your full name you might use an alias or a role, or perhaps talk on behalf of a group. This can help with privacy, especially when users can see each other’s identity. Pseudonymity shines here; users are identified but not necessarily by their primary (login) identity and not necessarily related to a communication identity or other identities that they may be using.
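The step-down is visible even in the simplest mechanism: the SASL PLAIN message (RFC 4616) carries an optional authorisation identity in front of the authentication identity. The identities and password below are made-up examples:

```python
# The SASL PLAIN message format (RFC 4616):
#   [authzid] NUL authcid NUL password
# The optional authzid lets an authenticated user act under an alias,
# role, or group identity instead of their login identity.
def plain_message(authzid: str, authcid: str, password: str) -> bytes:
    return b"\0".join(s.encode("utf-8") for s in (authzid, authcid, password))

# The user proves they are "john@example.com" but acts as the role "sales":
msg = plain_message("sales", "john@example.com", "s3cret")
authz, authc, _pw = msg.split(b"\0")
assert authz == b"sales" and authc == b"john@example.com"
```

Richer mechanisms such as SCRAM carry the authorisation identity separately from the (hashed) proof, but the privacy effect is the same: peers see the role, not the login name.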
We believe HTTP should adopt SASL. It has been accepting individual mechanisms, but this is not helpful because it is not reaching the right people. Authentication of HTTP applications should move out of applications and into the HTTP layer. We also believe that channel binding, that is linking the surrounding TLS connection in with the authentication, should at least be available as an option.
We believe that SASL can arrive on a remote service and be passed back to a home realm for validation. All the remote service would need to know is that it is indeed talking to the home realm (a domain name, basically) and that this realm vouches for a claimed user identity. This requires specifying a domain name, which is easiest when using email-style user names: a name like email@example.com spells out the domain that should validate the user. This is not trivial, but it is expected to be possible and useful. Most importantly, it turns SASL into a realm-crossover technology for BYOID.
We believe it may be helpful to have mechanisms that can pass safely through an intermediate service, such as the Diameter relay system. SASL was not designed for this, so either encryption should be added, or one of the few mechanisms that can safely pass through an intermediate should be used. At present, only GSS-API comes to mind, as well as SRP, which is expected to fail under quantum computing. The addition of public-key crypto that is resilient to quantum computers may be a helpful extension to the SASL portfolio of algorithms.
InternetWide Vision on Document and Email Security
Not everybody uses digital signatures on their email, or encryption to conceal the content from others than the intended recipient. These usage patterns do serve important uses to some, however, and they are an intrinsic part of the liberal openness of the free Internet. (Some might say that the technology could be abused, but for those of bad intention this is probably more than they need; there are plenty of clever tricks that can easily conceal online behaviour, with or without cryptography.)
The technology for securing documents and emails comes in two variants: The X.509 public-key system and OpenPGP. The former tends to be part of government attempts to provide civilians with certificates, the latter is what is actually being used by people involved in security and privacy. Both of these technologies can be modified with "plugin" algorithms that would be protected from Quantum Computing. It is reasonable to expect these mechanisms to develop in that direction.
The InternetWide Architecture uses open technology precisely because it has this healthy tendency to develop. We do not believe we can outsmart these algorithms, and will simply abide our time, and integrate them soon after they arrive.
This technology is more in demand than people realise. It is common for identities to be stolen; especially monetary institutions and governments are under constant attack. Consumer programmes make a habit of telling people what hints may indicate possible intrusions, but this always comes after the fact, because intruders evolve to overcome such "challenges". The one challenge they could not overcome is sending email that can be authenticated as coming from the claimed sender; that is, signed with a key that the consumer can trace back to the expected sender. As soon as banks and governments sign their email, consumers can finally start validating it, and the assaults should stop instantly. Signatures with old algorithms can be relied upon until the first realistic Quantum Computers arrive, so their future arrival is no excuse to defer this introduction of digital professionality.
Building This Vision
This set of measures ought to carry our projects past the problems we face due to the rise of Quantum Computers. With broad adoption, they would offer generic support. This is why we embrace open protocols and open frameworks, instead of inventing something to shield off our own niche; we would not want to defend a solution with the vague "we know what we are doing" or "you can trust us" claims that invariably accompany closed solutions that proclaim to be secure.
Our project currently focusses mostly on the protocols and identity systems. For those, we are building the vision above as we speak. We have a small team, but are always interested in hearing from you if you would like to connect your software to our setup. Talk to us if you want to know if and how this would be possible!