Cryptography and Quantum Computing with Cyferall

A practical overview

This article is intended to review how cryptography has evolved over time and the challenge presented to current cryptographic techniques by the arrival of quantum computing.

Introduction

For the layman, cryptography is a subject shrouded in mystery, viewed as complex and often involving a fair amount of intrigue. This is perfectly normal, since it is all about secrets and secrecy. From the Romans’ use of the Caesar cipher, to the role played by Alan Turing’s codebreaking machines in deciphering enemy communications during WWII, to the “red phone” encrypted link that connected Washington to Moscow during the Cold War, examples abound of how Man has tried to keep sensitive or personal information secret from prying eyes.

While these creative means of protecting secrets have been around for quite some time, entirely new techniques had to be invented with the arrival of the digital age. In the digital age, information takes on a generic form (0s and 1s, known as bits), circulating at lightning speed through global networks along paths that may or may not be known and passing through the “hands” (or nodes) of entities that are not authorized recipients. The digital age thus introduced an entirely new reality for which specific cryptographic techniques had to be invented.

For the purposes of this article, we will fast forward to the advent of the internet, better known at the time as the ARPAnet. Even though the digital age as such came into existence much earlier, the danger to digital information grew exponentially once it became accessible through a network connection. Originally intended in 1969 as a decentralized and fully redundant communication system that could survive Cold War nuclear attacks, the ARPAnet became the means to exchange information relating to US government funded university research, then quickly evolved into a worldwide mosaic of connections reaching private, public and government servers, containing every type of digitized information and communicating over the standardized Internet Protocol (IP). The novelty of the information that could be accessed, the power that came from reaching knowledge bases theretofore inaccessible, and the general availability of the ARPAnet/Internet to the public led to, and continues to drive, its uncontrolled and exponential use. Such unanticipated levels of use meant that traditional protective measures, applied to limit access to sensitive information, trailed dangerously behind. In fact, in 1978 the U.S. National Security Agency resisted recommendations by Professors Vinton G. Cerf and Robert E. Kahn to build encryption capabilities directly into the TCP/IP protocols that would later enable the exponential growth of the internet. Ultimately, it was the Morris Worm of November 2, 1988, that highlighted the key security flaws of the internet’s design, and the “buffer overflow” flaw exploited by Robert Morris back then continues to be exploited today.

The arrival of the internet therefore created a new paradigm: how to protect digital information passing at lightning speed through an IP network made up of a mosaic of global nodes, with little, if any, predictability as to the route taken.

A primer on Symmetrical Encryption

In the beginning, this problem was addressed by encrypting the data with a symmetrical encryption algorithm. Symmetrical encryption algorithms had been designed since the early 1900s, well before the internet, as their use was first foreseen on electromechanical machines. On a computer, they work as follows: the sender and the recipient share a single secret key, a string of s bits, and that same key is used both to encrypt and to decrypt the data.

More will be said later about the early work on symmetrical keys and about recent innovations relating to quantum-resistant encryption.

Symmetrical encryption algorithms are designed to have the property that they can only be broken by brute force, that is, by trying every possible key, one by one, until the right one is found. The key length s, in bits, is called the security level of the algorithm. The original Data Encryption Standard (DES) had a 56-bit key, and its successor, AES, is today standardized with 128-, 192- and 256-bit keys[1].

The 56-bit key was found to be susceptible to brute force attacks, facilitated both by design weaknesses and by the exponential increase in computing power known as Moore’s law. Ultimately, the 56-bit standard was abandoned and replaced by AES, which was developed with those weaknesses in mind. With this new design, there is no known technique for meaningfully improving on the brute force capabilities of a classical computer, and with a 256-bit key the number of possible keys (2^256, roughly 10^77) is of an order of magnitude close to the commonly estimated number of atoms in the observable universe. From this perspective, AES-256 can resist any brute force attack by a classical computer; however, as we will see later, it can also be subject to indirect attacks and, now, to quantum computing attacks.
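To make the single-key principle concrete, here is a minimal sketch of AES-256 encryption and decryption. It assumes the third-party Python cryptography package and AES in CTR mode; both are illustrative choices of ours, not anything prescribed by this article.

```python
# A minimal sketch, assuming the third-party "cryptography" package and AES in
# CTR mode; these are illustrative choices, not anything prescribed by the article.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)      # the shared 256-bit secret key (s = 256)
nonce = os.urandom(16)    # per-message counter block for CTR mode

def aes_ctr(data: bytes) -> bytes:
    # In CTR mode the same operation both encrypts and decrypts:
    # the data is XORed with a keystream derived from the key and the nonce.
    ctx = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return ctx.update(data) + ctx.finalize()

ciphertext = aes_ctr(b"attack at dawn")
assert aes_ctr(ciphertext) == b"attack at dawn"   # decryption uses the same key
```

Note that the sketch quietly assumes both parties already possess the same key, which is precisely the distribution problem discussed next.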

But in a world of constant information exchange, protecting digital data through a resistant form of symmetric encryption is not the only challenge. Since the same symmetric key both encrypts and decrypts, there is an obvious need to communicate that key securely to the recipient so that it can be used to decrypt the data. Given that the sender and the recipient will rarely be an office distance apart, the symmetric key must be transmitted by the sender to the recipient through the internet, with all the network flaws, gaps, and back doors that we know … and there lies the second challenge.

A primer on Asymmetrical Encryption

In 1976, at the American National Computer Conference, Whitfield Diffie and Martin Hellman publicly[2] presented a new encryption concept called asymmetrical encryption, in which two different keys are used for encryption and for decryption[3]. Under this concept, the key utilized for encryption (the public key) no longer needs to be kept secret, since it cannot be utilized for decryption; only the matching private key can decrypt. The figure below shows the asymmetrical encryption process, with authentication of the sender, as originally described by Diffie and Hellman.

The principal asymmetrical encryption algorithms are Rivest-Shamir-Adleman (RSA), Digital Signature Algorithm (DSA) and Elliptic Curve Cryptography (ECC).
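To make the concept concrete, here is a minimal sketch of asymmetric encryption with RSA and OAEP padding. The library (the Python cryptography package) and the specific padding are illustrative choices of ours, not something mandated by Diffie and Hellman’s paper or by this article.

```python
# A minimal sketch of asymmetric encryption with RSA-OAEP, using the Python
# "cryptography" package; library and padding are our illustrative choices.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The recipient generates the key pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone who knows the public key can encrypt ...
ciphertext = public_key.encrypt(b"short secret", oaep)

# ... but only the holder of the private key can decrypt.
assert private_key.decrypt(ciphertext, oaep) == b"short secret"
```

Note that the plaintext here must be shorter than the key modulus minus the padding overhead, and that each operation involves expensive arithmetic on 3072-bit numbers, which foreshadows the computing power issue discussed below.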

Without more information, this description of the asymmetrical encryption concept gives the distinct impression that it could outright replace symmetrical encryption, but this is not the case. In fact, asymmetrical encryption requires much more computing power than symmetrical encryption, and the computing power required grows significantly with the length of the message. As a result, in the 1980s the use of asymmetric encryption as the sole method of encryption was reserved for large organizations having the computing power necessary for exchanging long messages. The exponential growth of the internet, and the constant exchange of information that is its very purpose, meant there was a technological challenge to be met.

In 1991, the American scientist Philip Zimmermann was the first to propose using a protocol mixing asymmetrical and symmetrical encryption to solve this computing power issue, with the Pretty Good Privacy (PGP) software. With this software, the message itself is encrypted with a fast symmetrical algorithm whose key is exchanged beforehand, the secrecy of that exchange being ensured by asymmetrical encryption. The asymmetrical algorithm, which requires far more computing power, is therefore only used for exchanging the key of the symmetrical algorithm, a key that remains much shorter than the message itself. PGP solved the computing power issue and gave anyone on the internet access to asymmetric encryption.
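The hybrid pattern can be sketched as follows. This is our own illustration of the general approach (a fresh symmetric session key for the message body, asymmetric encryption only for that key), not a description of the actual PGP message format; the library and algorithm choices are assumptions for the example.

```python
# A minimal sketch of the hybrid pattern (not the actual PGP message format):
# the long message body is encrypted with a fast symmetric cipher, and only the
# short session key is protected with asymmetric encryption.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
recipient = rsa.generate_private_key(public_exponent=65537, key_size=3072)

# Sender: a fresh 256-bit session key encrypts the (long) message body ...
session_key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
body = AESGCM(session_key).encrypt(nonce, b"a very long message ... " * 1000, None)
# ... and only the 32-byte session key goes through the slow asymmetric step.
wrapped_key = recipient.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the session key, then decrypt the body.
session_key = recipient.decrypt(wrapped_key, oaep)
assert AESGCM(session_key).decrypt(nonce, body, None).startswith(b"a very long")
```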

All modern cryptosystems mix symmetrical and asymmetrical encryption algorithms, and one might expect this to be sufficient to provide full protection for digital data, but unfortunately it is not. Symmetrical algorithms remain subject to indirect attacks, the software architecture of information systems presents weaknesses when handling data in the clear and, more concerning still, classic symmetrical and asymmetrical encryption algorithms can be subject to new kinds of attacks based on quantum computing.

With this background in place, let us now review the remaining weaknesses of current symmetrical and asymmetrical encryption techniques and of the software architectures in which they are used.

Challenges to Symmetrical encryption

Symmetrical encryption algorithms like AES, although resistant to direct attacks from classical computers, can be subject to a first category of indirect attacks known as side-channel attacks. When the same key is utilized for the encryption of several messages, an external statistical analysis of execution time or power consumption across many different messages can eliminate the variable part of the data set and retrieve the fixed part, namely the bits of the key. In the case of AES, this attack is known as a cache-timing attack[4]. For the encryption of real-time streams, such as audio or video conferences, AES alone is not fast enough, and it provides no verification of the integrity of the data. To compensate, AES must be encapsulated in other software layers, such as Offset Codebook (OCB) mode or Galois/Counter Mode (GCM), which provide integrity verification and allow parallelized computation; but these additional layers introduce new weaknesses, opening the door to a second category of indirect attacks[5] [6] [7]. The risk of these indirect attacks can be mitigated, but only at the cost of using specific hardware processors, significantly downgrading performance, or imposing other operational limitations, such as limiting the amount of data that can be encrypted with the same key.
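For illustration, here is a minimal sketch of authenticated encryption with AES-GCM, the Galois/Counter Mode mentioned above, using the Python cryptography package (our choice for the example). The appended authentication tag is what provides the integrity verification, and the decryption call rejects any ciphertext that has been modified.

```python
# A minimal sketch of authenticated encryption with AES-GCM, using the Python
# "cryptography" package (our choice for the example); the appended tag is what
# provides the integrity verification that raw AES lacks.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
aead = AESGCM(key)

# The ciphertext carries a 16-byte authentication tag computed over the data
# and the associated data (here, a stream identifier left unencrypted).
ciphertext = aead.encrypt(nonce, b"one frame of a video stream", b"stream-id=42")

# Flip a single bit in transit: decryption now fails instead of returning garbage.
tampered = bytes([ciphertext[0] ^ 0x01]) + ciphertext[1:]
try:
    aead.decrypt(nonce, tampered, b"stream-id=42")
except InvalidTag:
    print("tampering detected, frame rejected")

# Caveat: a (key, nonce) pair must never be reused, one reason such modes limit
# the amount of data that may be encrypted under a single key.
```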

Weak IS Software Architectures

Another weakness of the current encryption ecosystem is a consequence of the current software architecture of information systems. In an information system, data is always in one of three states: “in use”, “at rest” or “in transit”. Data is “in use” when created by a sender or rendered to a recipient, and data “in use” must always be “in-the-clear” for it to be intelligible to its users. Data is “at rest” when stored on mass storage devices, locally or in the cloud. Data is “in transit” when moving from one computer to another, through local networks or the internet. Data “at rest” and “in transit” must always be encrypted to be protected against attacks. However, changes from one state to another occur regularly and, while these changes of state occur, data is “in-the-clear”. That data “in use” is “in-the-clear” is obvious, but this is also the case when data changes from “in transit” to “at rest” and vice versa, as the encryption keys are often different for these two modes and transcoding has to be performed on an intermediate, “in-the-clear” form of the data. Therefore, if the data that is “in-the-clear” during these changes of state must imperatively remain secret, such as confidential messages, sensitive real-time voice/video streams, and also symmetrical keys and private keys, the current IT software architecture does not protect it, and the only method available is to ensure that these changes of state occur inside protected enclaves.
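As a schematic illustration of such a change in state (our own sketch, not a description of any particular product), consider data that arrives encrypted under a transport key and must be stored encrypted under a different storage key: between the two operations, the data sits in memory in the clear.

```python
# A schematic sketch (our own illustration, not a description of any product):
# data arrives encrypted under a transport key and must be stored under a
# different storage key; between the two calls it sits in memory in the clear.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

transport_key = AESGCM.generate_key(bit_length=256)   # protects data "in transit"
storage_key = AESGCM.generate_key(bit_length=256)     # protects data "at rest"

n1 = os.urandom(12)
in_transit = AESGCM(transport_key).encrypt(n1, b"confidential record", None)

# The transcoding step: unless it runs inside a protected enclave, this
# intermediate variable is exactly the "in-the-clear" window described above.
in_the_clear = AESGCM(transport_key).decrypt(n1, in_transit, None)

n2 = os.urandom(12)
at_rest = AESGCM(storage_key).encrypt(n2, in_the_clear, None)   # stored form
```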

Unfortunately, most information systems only provide encryption for data “in transit” and ignore its changes of state, considering that it is the responsibility of the equipment/computers within the network to protect data “in use” and “at rest” with firewalls, antivirus software, and operating-system-based data encryption. But this type of protection faces new kinds of attacks that, for example, exploit so-called “zero-day” weaknesses and can therefore never provide full protection. In fact, many messaging, communication and videoconferencing platforms available on the market today claim to provide “end-to-end” encryption, but this protection only covers data transiting from one “end” to the other, and these “ends” are often just browsers. Within the caches of these browsers, data is “in-the-clear” and a potential prey for viruses and malware. When video conferences are recorded, the data is often put “at rest” in clear form and is again subject to eavesdropping. Moreover, the key management practices of the platforms claiming “end-to-end encryption” do not necessarily provide guarantees of secrecy, and sometimes these keys are known to the service operator, who can then eavesdrop or allow a third party to eavesdrop. When high-value data is involved, these software architecture weaknesses have led to the use of very expensive physical means: proprietary end-user terminals in protected facilities, dedicated private local area networks also housed in protected facilities and isolated from the internet, and even extremely expensive wide area networks intended to entirely replace the internet.

Quantum Computing — the final blow

Quantum computing capabilities will render classic asymmetrical encryption algorithms obsolete, breaking them in a matter of minutes, and existing AES symmetrical algorithms in a matter of weeks. For symmetrical encryption, the Grover algorithm[8] gives quantum computers a way to improve the efficiency of brute force key search, something classical computers have failed to provide so far. As a result of the Grover algorithm, for AES with an s-bit key, the complexity of searching the key space is reduced from O(2^s) to O(2^(s/2)). Therefore, AES-256 will be no more resistant to a quantum computer than AES-128 is to a classical computer. To restore the same level of security, a symmetrical encryption algorithm must have a key of at least 512 bits.
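A back-of-envelope way to see the consequence of this square-root speed-up (constant factors, error correction and the cost of quantum hardware are deliberately ignored here):

```python
# A back-of-envelope illustration of the square-root speed-up (constant factors,
# error correction and quantum hardware costs are deliberately ignored here).
for s in (128, 256, 512):
    print(f"AES-{s}: classical brute force ~2^{s} keys, "
          f"Grover search ~2^{s // 2} iterations")
# AES-256 drops to ~2^128 quantum iterations; only a 512-bit key
# restores a 256-bit work factor.
```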

For asymmetrical encryption the situation is even worse. Indeed, the Shor algorithm[9] enables a quantum computer to solve factorization and discrete logarithm problems with an amount of resources (computing time and memory) that is a polynomial function of the size of the problem[10]. In asymmetrical encryption, the problem to solve is finding the private key when only the public key is known; the size of this problem is the public key length and, for all current algorithms, the problem is either a factorization or a discrete logarithm computation. Therefore, for a quantum computer the time needed to break an asymmetric encryption algorithm is only a polynomial function of the key length, while for a classical supercomputer running the best performing cryptanalysis algorithms it is a sub-exponential function. For instance, with an asymmetrical encryption algorithm such as RSA, which relies on factorization, the NSA recommends a 3072-bit key. It would take thousands of years for a classical supercomputer running a sub-exponential cryptanalysis algorithm to factorize such a key, whereas it would be only a matter of minutes for a mature quantum computer, with enough qubits, to perform this operation.
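The gap can be illustrated with a rough calculation for a 3072-bit RSA modulus. The figures below rest on two assumptions of ours, not on anything stated in this article: the heuristic complexity formula of the General Number Field Sieve for the best known classical attack, and a commonly quoted cubic scaling in the key length for the number of elementary quantum operations in Shor’s algorithm.

```python
# A rough comparison for a 3072-bit RSA modulus. Two assumptions of ours: the
# heuristic GNFS complexity exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)) with
# c = (64/9)^(1/3) for the best known classical attack, and a commonly quoted
# ~n^3 scaling for the elementary operations of Shor's algorithm.
import math

n_bits = 3072
ln_N = n_bits * math.log(2)                       # natural log of the modulus
c = (64 / 9) ** (1 / 3)
gnfs_exponent = c * ln_N ** (1 / 3) * math.log(ln_N) ** (2 / 3)
print(f"classical (GNFS): ~2^{gnfs_exponent / math.log(2):.0f} operations")    # ~2^139

shor_ops = n_bits ** 3                            # polynomial in the key length
print(f"quantum (Shor):   ~2^{math.log2(shor_ops):.0f} elementary operations")  # ~2^35
```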

As early as January 2016, the American National Security Agency (NSA) issued an alert in the Commercial National Security Algorithm Suite and Quantum Computing FAQ[11] regarding the vital importance of encrypting data using post-quantum encryption. At the end of 2016, NIST launched the Post-Quantum Cryptography initiative to solicit, evaluate, and standardize one or more quantum-resistant public-key cryptographic algorithms[12]. This process led to the announcement in July 2022 that four candidates had been selected for standardization, one for encryption and three for digital signatures. In May 2022, the American White House also issued two executive orders, the first addressing the need to create the conditions for the US to lead in the development of quantum computing capabilities and the second enjoining all US federal services to prepare plans to protect their data against quantum computing[13]. In other words, the race to develop mature quantum computing technology will accelerate and the threat to cryptography is considered immediate.

The financial consequences of classical encryption weaknesses, of the inherent weaknesses of most current software architectures in the face of attacks by classical computers, and of the need to protect digital data are already enormous: cybercrime has an annual global cost of over $6 trillion (McAfee 2021), without taking into account soft costs such as unplanned downtime, breach investigation costs, productivity disruption, theft of valuable intellectual property that took years to create, brand and reputational damage, professional and personal identity theft, and the cost to hacked organizations of defending against litigation … Because it is extremely difficult to trace, to prove and to prosecute, cyber-criminality spares no government, no business segment, and no individual.

Quantum computing will increase the risks and the costs by an order of magnitude[14] and will trigger an inevitable redesign of existing information systems, but it is also an opportunity to migrate towards post-quantum information systems that address all data protection weaknesses, not just the ones directly linked to the arrival of quantum computing.

[1] In the mid-70s, the Data Encryption Standard (DES) was standardized by the American National Bureau of Standards (NBS) (renamed the American National Institute of Standards and Technology [NIST] in 1988) and it was replaced in 2001 by the Advanced Encryption Standard (AES)

[2] Previous work on the same subject in the 70s is attributed to English mathematicians and cryptographers at the UK Government Communications Headquarters (GCHQ) but this work was covered at that time by secrecy and remained unpublished

[3] Diffie, Whitfield and Hellman, Martin E., New Directions in cryptography, IEEE Transactions on Information Theory, 22(6): 644–654, 1976

[4] Bernstein, Daniel J., Cache-timing attacks on AES, https://cr.yp.to/antiforgery/cachetiming-20050414.pdf

[5] Akiko Inoue and Kazuhiko Minematsu (2018-10-26), Cryptanalysis of OCB2, https://eprint.iacr.org/2018/1040.pdf

[6] Niels Ferguson (2002-02-11), Collision attacks on OCB, https://csrc.nist.gov/csrc/media/projects/block-cipher-techniques/documents/bcm/comments/general-comments/papers/ferguson.pdf

[7] Joux, Antoine, Authentication failures in NIST version of GCM, https://csrc.nist.gov/csrc/media/projects/block-cipher-techniques/documents/bcm/comments/800-38-series-drafts/gcm/joux_comments.pdf

[8] Grover L.K, A fast quantum mechanical algorithm for database search, Proceedings, 28th Annual ACM Symposium on the Theory of Computing, (May 1996) p. 212

[9] Shor, P.W. (1994), Algorithms for quantum computation: discrete logarithms and factoring. Proceedings 35th Annual Symposium on Foundations of Computer Science. IEEE Comput. Soc. Press: 124–134

[10] Martin Roetteler, Michael Naehrig, Krysta M. Svore, and Kristin Lauter, Quantum Resource Estimates for Computing Elliptic Curve Discrete Logarithms, arXiv:1706.06752 [quant-ph] (2017)

[11] https://cryptome.org/2016/01/CNSA-Suite-and-Quantum-Computing-FAQ.pdf

[12] https://csrc.nist.gov/Projects/post-quantum-cryptography

[13] https://www.whitehouse.gov/briefing-room/statements-releases/2022/05/04/national-security-memorandum-on-promoting-united-states-leadership-in-quantum-computing-while-mitigating-risks-to-vulnerable-cryptographic-systems/

[14] David Joseph, Rafael Misoczki, Marc Manzano, Joe Tricot, Fernando Dominguez Pinuaga, Olivier Lacombe, Stefan Leichenauer, Jack Hidary, Phil Venables & Royal Hansen, Transitioning organizations to post-quantum cryptography, Nature, https://www.nature.com/articles/s41586-022-04623-2