“Religious wars are basically people killing each other over who has the better imaginary friend” ~ Napoleon
You may belong to the cult of quantum, since it was recently proven (relative to an oracle) that BQP is not contained in PH, or you may be the average-Joe CTO who's well educated on the basics of qubits but always goes to sleep with the excuse that real, useful quantum computers are far away and not tomorrow's worry.
This is a very brief list of companies working on quantum computers or consulting/advising on how to build them. Most of them are playing with millions of dollars, and judging from the growing number of headlines on the topic, the field follows the usual startup pattern: it goes neither in the planned direction nor at the planned speed, but it adapts, and there is SOME advance.
Let’s recap quickly, for the layman, who the enemy is in this context and why I’m actively fighting it in my research:
Please get that cat out of your black-box already
Nobel laureate and science god Richard Feynman famously argued in 1981 that quantum systems could be harnessed for universal computation, building on Paul Benioff’s 1980 quantum-mechanical formalization of the Turing machine.
In the mainstream media you read the following nonsense:
Traditional computers represent data in a binary state, which is either 0 or 1. Quantum computers however represent them in qubits which can be both 0 and 1 at the same time.
This freakin’ cat again…
I had the opportunity to speak at a quantum lecture series with Zoltán Zimborás, from whom I first heard the quote:
“The most interesting thing with this cat is not that it’s both dead and alive but that there exists a revival operator!”
In reality, the non-Big-Bang-Theory version of this qubit misery is that a quantum system lives in a complex Hilbert space equipped with an inner product. A particular state in this space is a vector whose norm (induced by that inner product) equals 1. An n-qubit system therefore lives in the joint system of n two-dimensional Hilbert spaces. Such a joint space holds a huge number of linear combinations (superpositions), which can be exploited to speed up some types of computation.
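To make that concrete, here’s a minimal numpy sketch (my own toy illustration, not any production quantum SDK): qubits are unit vectors in a 2-dimensional complex space, and an n-qubit register is their tensor (Kronecker) product, living in a 2^n-dimensional space.

```python
import numpy as np

# Single-qubit basis states as vectors in a 2-dimensional Hilbert space.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A superposition: equal weight on |0> and |1>, with Euclidean norm 1.
plus = (ket0 + ket1) / np.sqrt(2)
assert np.isclose(np.linalg.norm(plus), 1.0)

# An n-qubit system lives in the tensor product of n such spaces:
# 3 qubits -> a 2**3 = 8-dimensional state vector.
state = np.kron(np.kron(plus, plus), plus)
print(state.shape)  # (8,)
# The joint state is still a unit vector; its 8 amplitudes are the
# "huge number of linear combinations" being exploited.
```

Note how the dimension grows exponentially with the qubit count — which is exactly why classical simulation chokes and why the speedups are possible in the first place.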
But there’s a catch.
Expressing and transitioning quantum states with tensor products is indeed a formally correct way to attack NP (and BQP) problems, and until you measure the result, every possible solution is indeed “present” in the quantum state… but it’s not deterministic. It should go without saying that if you exploit the laws of nondeterminism (it’s not quite correct to label Heisenberg’s or Schrödinger’s results this way), you have to live with its consequences. After the bra-ket magic is done and you have polynomially or exponentially many candidate solutions in your Hilbert space, you have to find an interference function that collapses this state onto the ones you are actually interested in, with some non-negligible probability.
This makes trivial problems like adding numbers impractical, and gargantuan problems like factoring products of large primes easy.
For any pair of numbers x and y with no common factor, repeatedly multiplying x by itself modulo y will eventually reach a number that is congruent to 1 modulo y; the length of this cycle is the order (or period) of x.
Combining this with the Quantum Fourier Transform as our interference function (the period is actually visible as a distance between “kets”, so we transform those distances into frequencies to collapse onto a value), we can factorize large numbers with unprecedented efficiency.
Chill I know, “but I’ve read on some second-grade medium article that Shor’s algorithm is unfeasible today because we don’t have the necessary qubits.”
Well we don’t need Shor’s original algorithm either.
Since the invention of quantum error correction (which was actually the game-changing event that bootstrapped this area of research), it has become clear that stabilizer codes and better interference functions let a quantum system tolerate almost arbitrary noise, which means one can fine-tune algorithms for “noisier” settings at a theoretically equivalent sacrifice.
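The simplest stabilizer code is the 3-qubit bit-flip repetition code, and it already shows the trick: parity checks reveal which qubit flipped without ever measuring the protected amplitudes. A toy numpy sketch (for a single known X error the syndrome is deterministic, so plain expectation values of the stabilizers suffice here):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

def on_qubit(gate, i, n=3):
    """Lift a single-qubit gate to act on qubit i of an n-qubit register."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, gate if j == i else I)
    return out

# Encode a|0> + b|1>  ->  a|000> + b|111>
a, b = 0.6, 0.8
logical = np.zeros(8)
logical[0b000], logical[0b111] = a, b

corrupted = on_qubit(X, 1) @ logical   # a bit flip hits qubit 1

# Stabilizer parities Z0Z1 and Z1Z2: their ±1 values flag the flipped
# qubit while revealing nothing about the amplitudes a and b.
s1 = corrupted @ (on_qubit(Z, 0) @ on_qubit(Z, 1) @ corrupted)
s2 = corrupted @ (on_qubit(Z, 1) @ on_qubit(Z, 2) @ corrupted)
flipped = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(round(s1), round(s2))]

recovered = on_qubit(X, flipped) @ corrupted
assert np.allclose(recovered, logical)  # the superposition survives intact
```

Real stabilizer codes (Shor’s 9-qubit code, surface codes) extend exactly this parity-check idea to phase errors as well.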
I stressed the word theoretically because these compromises don’t always mean economic equivalence, and can thus make attacks financially cheaper. There are advances on relaxing Shor’s original, very restrictive requirements, and recent research from Google promises to factor a 2048-bit RSA number in 8 hours with a fictional 20 million qubits. Yes, that number is still fictional, but I stress again: the further you back off from “well-entangled” qubits, the cheaper your costs become, and this attack is already a hundred times more space-efficient than prior methods.
Formalizing the attacker - QROM
Knowing that Shor’s algorithm is getting stronger and that nobody cares(?) about Grover, as usual in science we needed a formal way of capturing the power of quantum computation within our conventional security models. There are multiple techniques for doing so, and my favorite is the Quantum Random Oracle Model (QROM). In the original ROM we assume the oracle behaves “ideally”: its outputs are uniformly random, but the same input always yields the same output. The problem with this conventional model is that quantum adversaries can query the oracle in superposition, feeding it polynomially — and in extreme (Shor, Grover) cases exponentially — many inputs in a single interaction, which trivially makes the good ol’ “no probabilistic polynomial-time algorithm can solve the instance with non-negligible probability” mantra kinda obsolete.
As a defense mechanism we had to introduce a “please forget” mechanism to the oracle.
More precisely, we have to simulate destructive quantum interference with each query, either with actual quantum states or with nonces. With every query (measurement) to the oracle we have to make sure prior queries are no longer accessible (measuring a quantum state destroys it). For the general construction, please read the publication on the subject.
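To see what that mechanism has to replace, here’s how the classical ROM is usually simulated in proofs — lazy sampling, a sketch of my own (class name is made up). The simulator memorizes every query it has answered; a superposition query touches exponentially many inputs at once, so this bookkeeping no longer works, and QROM proof techniques exist precisely to fill that gap.

```python
import os

class LazyRandomOracle:
    """Classical ROM simulation: an answer is sampled fresh the first
    time an input is queried, then memorized so repeated queries of the
    same input stay consistent."""

    def __init__(self, out_len=32):
        self.table = {}        # the simulator's memory of past queries
        self.out_len = out_len

    def query(self, x: bytes) -> bytes:
        if x not in self.table:                       # first time we see x:
            self.table[x] = os.urandom(self.out_len)  # sample a fresh answer
        return self.table[x]

H = LazyRandomOracle()
assert H.query(b"hello") == H.query(b"hello")  # consistent across queries
assert H.query(b"hello") != H.query(b"world")  # distinct, w.h.p.
```

A classical reduction can inspect `H.table` to see exactly which inputs the adversary asked about; against superposition queries there is no such table to inspect, which is the core technical headache of the QROM.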
Mitigations and their exotic side-effects
There are many ways of defending against the general QROM adversary, or specific ones like Shor or Grover, in the literature. Amongst these are multivariate, supersingular-isogeny, code-based, and lattice-based cryptography, all of which I eventually plan to cover on this blog. A most interesting recent piece of research states that all of the above techniques are actually different forms of code-based crypto with varying Lp distance metrics. There’s also an ongoing standardization process for quantum-safe PKI by NIST, the same organization that standardized AES.
There’s the usual apples-vs-oranges debate between the different methods, and my favorite fruit in this shop is lattices.
Since covering lattices in general is planned for a following post, here I’ll just reason about my enthusiasm for this method.
First, they are much more efficient in hardware. This matters most in our world of security-vs-UX wars. One of the reasons why we don’t have truly industry-ready hardware wallets (sorry Trezor, Ledger) with proper key handling is that scalar multiplication (even with Montgomery ladders) and especially point inversion on elliptic curves are extremely (energy-)inefficient. There are improvements on the matter, like the AMNF, but they still can’t compete with the efficiency of LWE or RLWE. In most cases these two primitives fit into 16-bit memory (a common RLWE base constant is 59393 < 2^16) and they are HIGHLY parallelizable. For example, LWE and SIS primitives rely solely on (modular) matrix multiplication, and RLWE on polynomial multiplication, which is even more parallelizable with the NTT (a finite-field fast Fourier transform).
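To show just how little machinery LWE needs, here’s a toy Regev-style single-bit encryption sketch — nothing but modular matrix arithmetic. The parameters are deliberately tiny and utterly insecure; this illustrates the shape of the computation, not a real scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 12289, 16, 64   # toy parameters, far too small for real security

# Key generation: secret s, public (A, b = A·s + e mod q) with small noise e.
s = rng.integers(0, q, n)
A = rng.integers(0, q, (m, n))
e = rng.integers(-2, 3, m)          # small noise (a stand-in for Gaussian)
b = (A @ s + e) % q

def encrypt(bit):
    # Sum a random subset of the public samples, hiding the message
    # in the "high bit" as bit * q//2.
    r = rng.integers(0, 2, m)       # random 0/1 selection vector
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    # v - <u, s> = r·e + bit*(q//2) mod q; the accumulated noise r·e is
    # tiny compared to q//2, so round to the nearest multiple of q//2.
    d = (v - u @ s) % q
    return int(q // 4 < d < 3 * q // 4)

for bit in (0, 1):
    u, v = encrypt(bit)
    assert decrypt(u, v) == bit
```

Everything here is a matrix-vector product over 16-bit-sized integers — exactly the kind of workload cheap embedded hardware parallelizes well, in contrast to elliptic-curve field inversions.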
Besides being CPU- and memory-friendly, these methods are the Swiss army knives for the most interesting cryptographic magic of our time: fully homomorphic encryption, which allows us to do arbitrary operations on encrypted data (hello GDPR); identity-based encryption, which I believe could solve one of the biggest obstacles to mass blockchain adoption; and indistinguishability obfuscation, which would make companies like Denuvo obsolete since it’s the first secure method of software obfuscation.
Another interesting theoretical advantage that made most cryptographers fall in love with lattice constructions is that you can finally utilize Gaussian sampling in your security “proofs”, which is much friendlier to reason about than the uniform distribution. Formally, if you sample a vector from the lattice with a discrete Gaussian distribution and re-map it into the fundamental parallelepiped of the lattice, the result is statistically “crypto-close” to uniform. This is really important in our work, since the uniform distribution itself offers little surface to grab onto: it’s the ideal model for security, but the other edge of that sword is that you can’t argue about it well. In contrast, Gaussian distributions are used in various other areas of academia, and there’s a lifetime’s worth of statistics literature on them.
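For intuition, here’s the naive way to sample that discrete Gaussian over the integers — plain rejection sampling, a sketch of my own (real implementations use constant-time CDT or Knuth-Yao samplers, since timing leaks from this loop are themselves an attack surface):

```python
import math
import random

def discrete_gaussian(sigma, tail=10):
    """Rejection-sample z proportional to exp(-z^2 / (2*sigma^2)) over
    the integers, truncated at ±tail*sigma (the tail mass beyond that
    is negligible)."""
    bound = int(math.ceil(tail * sigma))
    while True:
        z = random.randint(-bound, bound)
        # Accept z with probability proportional to the Gaussian weight.
        if random.random() < math.exp(-z * z / (2 * sigma * sigma)):
            return z

samples = [discrete_gaussian(3.2) for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(mean)  # the empirical mean sits near 0, as expected
```

This is exactly the distribution used for the noise term `e` in LWE-style schemes, and the statistical closeness argument in the text is what licenses swapping it in and out of uniform in reductions.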
In summary, if you follow the religious war over “are quantum computers here?”, you’ll see that most prior skeptics now agree the problem is at our door, and as an expert you should be familiar with the post-quantum camp. Many recent papers on generic PKI schemes now instantiate an LWE/RLWE version independently of their standing on the question. Besides this behavior, which I think is crucial in our rigorous profession, these new state-of-the-art theories are a quantum leap (ha) toward the most advanced areas of security, which are mostly unavailable with conventional schemes.