Scientists Prove a Quantum Computing Advantage over Classical
Quantum computing is getting a significant amount of attention these days, especially since IBM put a real 5-qubit machine online in May 2016 (the IBM Q Experience).
Can quantum computers really perform some calculations faster than classical computers can?
How do you characterize the areas where they do, or potentially could, do better?
Can you prove it?
In 1994 Peter Shor formulated his eponymous algorithm, which showed how to factor integers on a quantum computer almost exponentially faster than any known method on a classical computer. The algorithm gets a lot of attention because people are concerned that a quantum computer could break prime-factorization-based encryption such as RSA far faster than the thousands of years it would take using known classical methods. However, people often skip several elements of the fine print.
First, we would need millions and millions of extremely high-quality qubits with low error rates and long coherence times for this to work. Today we have 50.
Second, there’s the bit about “faster than any known method on a classical computer.” Since we do not know an efficient way of factoring large numbers on classical computers, factoring appears to be a hard problem. It has not been proved to be a hard problem. If someone comes up next week with an amazing new classical approach that factors as quickly as Shor’s algorithm might, then the conjecture that factoring is hard is false. We just don’t know.
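To make the division of labor concrete, here is a minimal sketch in Python of the classical number-theoretic reduction at the heart of Shor’s algorithm. The period finding below is done by brute force, which is exactly the exponentially slow step the quantum computer replaces; the function names are illustrative, not from any paper.

```python
# Shor's insight: to factor N, pick a coprime a and find the period r
# of f(x) = a^x mod N; then gcd(a^(r/2) - 1, N) often yields a factor.
# Only the period finding needs a quantum computer; the rest is cheap
# classical arithmetic like the code below.
from math import gcd

def find_period(a, n):
    """Brute-force the period r of a^x mod n (the quantum step in Shor's)."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def factor_from_period(n, a):
    """Try to pull a nontrivial factor of n out of the period of a."""
    if gcd(a, n) != 1:
        return gcd(a, n)          # lucky: a already shares a factor with n
    r = find_period(a, n)
    if r % 2 == 1:
        return None               # odd period: retry with a different a
    candidate = gcd(pow(a, r // 2) - 1, n)
    return candidate if 1 < candidate < n else None

print(factor_from_period(15, 7))  # -> 3, since 7 has period 4 modulo 15
```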
The basic computational unit in quantum computing is a qubit, short for quantum bit. While a classical bit is always 0 or 1, an operating qubit can also be in a superposition of both values at once. Through entanglement, the number of states a collection of qubits can represent doubles with each qubit you add, so the potential computational power grows exponentially. The qubits, together with the operations you apply to them, are called a circuit.
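As a concrete sketch, here is how such a circuit looks in Qiskit, IBM’s open-source SDK for the Q Experience (this assumes Qiskit is installed; the circuit is a standard textbook example, not one from the paper):

```python
# A two-qubit circuit: superposition, entanglement, then measurement.
from qiskit import QuantumCircuit

circuit = QuantumCircuit(2, 2)    # 2 qubits, 2 classical bits
circuit.h(0)                      # put qubit 0 into superposition
circuit.cx(0, 1)                  # entangle qubit 1 with qubit 0
circuit.measure([0, 1], [0, 1])   # collapse back to plain 0s and 1s

print(circuit.draw())             # ASCII sketch of the circuit
```

Running this circuit many times yields 00 and 11 with roughly equal probability, and essentially never 01 or 10: the hallmark of entanglement.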
Today’s qubits are not perfect: every operation carries a small chance of error, and a qubit holds its quantum state only for a limited time before noise overwhelms it. That time is called the coherence time.
Because each gate, or operation, you apply to a qubit takes some time, you can only do so many operations before you hit the coherence time limit. We call the number of sequential operations the depth. The overall depth of a quantum circuit is the maximum of the depths per qubit, that is, the longest chain of gates from input to measurement.
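Here is a quick sketch of how depth behaves, again assuming Qiskit; gates that act on different qubits in the same time step do not add to the depth:

```python
from qiskit import QuantumCircuit

# Three gates applied in parallel across three qubits: depth 1.
shallow = QuantumCircuit(3)
for q in range(3):
    shallow.h(q)
print(shallow.depth())  # -> 1

# Five gates applied in sequence on one qubit: depth 5.
deep = QuantumCircuit(3)
for _ in range(5):
    deep.x(0)
print(deep.depth())     # -> 5
```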
Since error rates and coherence time limit the depth, we are very interested in short-depth circuits and what we can do with them. These are the practical circuits today that implement quantum algorithms. Therefore this is a natural place to look to see if we can demonstrate an advantage over a classical approach.
There is one. It was proved by Dr. Sergey Bravyi of IBM Research, Dr. David Gosset of the University of Waterloo’s Institute for Quantum Computing (previously of IBM Research), and Dr. Robert König of the Institute for Advanced Study and the Zentrum Mathematik at the Technische Universität München.
It’s now published in Science as “Quantum advantage with shallow circuits.”
The width of a circuit, that is, the number of qubits, can be related to the required depth of the circuit to solve a specific kind of problem. Qubits start out as 0s or 1s; we perform operations on them involving superposition and entanglement; and then we measure them. Once measured, we again have 0s and 1s.
What the scientists proved is that certain problems require only a fixed circuit depth when done on a quantum computer, even as you increase the number of qubits used for inputs. On a classical computer, the circuit depth for these same problems must grow (at least logarithmically in the number of inputs) as you add inputs.
To make up some illustrative numbers, suppose you needed a circuit of depth at most 10 for a problem on a quantum computer, no matter how many 0s and 1s you held in that many qubits for input. For that same problem, a classical circuit might need depth 10 for 16 inputs, 20 for 256 inputs, 30 for 4,096 inputs, and so on, with the required depth continuing to climb as the inputs grow.
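A toy calculation (not the paper’s actual construction) shows why classical depth can be forced to grow: combining n input bits with two-input gates takes a binary tree of gates whose depth is at least log2(n).

```python
# Combining n bits pairwise needs ceil(log2(n)) layers of two-input
# gates, so the minimum classical depth rises with the input size,
# while the theorem's quantum circuits stay at one fixed depth.
from math import ceil, log2

for n in (16, 256, 4096):
    print(f"{n:5d} inputs -> at least {ceil(log2(n)):2d} layers of gates")
```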
This conclusively shows that fault-tolerant quantum computers will do some things better than classical computers can. It also gives us guidance on how to advance our current technology to take advantage of this as quickly as possible. The proof is the first demonstration of an unconditional separation between quantum and classical algorithms, albeit in the special case of constant-depth computations.