In that previous column, I wrote that quantum computing “promises miraculous advances, but it also promises to tear down fundamental assumptions about numbers, about computers, that form the framework of much of how the world works today.” I also suggested that quantum computing is “probably still years away from widespread use”. That’s a prediction I’ll stand by, though with perhaps a little less confidence than when I first made it. For only days ago, there was news of a significant advance in quantum computing.

The qubits, they may be coming.

The news was from that 21st Century powerhouse of all things computing, Google. A team of their researchers published a paper (“Quantum supremacy using a programmable superconducting processor”, *Nature*) that claimed they had achieved an “experimental realization of quantum supremacy”. That is, they had produced a quantum computer, Sycamore, that was able to perform a particular task significantly faster than the most powerful existing “classical” computer could.

How much faster? Here’s where I hope you’re sitting down. Google claimed Sycamore takes “about 200 seconds” to do this job. A “state-of-the-art classical supercomputer” would take, they claimed — are you ready for this? — “approximately 10,000 years” to do the same job.

Take a moment to let that sink in. If the Google scientists are right, this is a paradigm-shattering advance, one that will have implications in nearly every facet of our lives. Parts of the technologies we are immersed in every day are founded on algorithms that are, in a real sense, very hard: meaning they might take a very long time, like 10,000 years, to finish running. What happens if that time shrinks to just over 3 minutes?

In 2012, John Preskill, professor of physics at Caltech, coined the term “quantum supremacy” for just this moment in time: when quantum computers are able to perform tasks that classical computers effectively can’t. That is why the Google team claims to have achieved quantum supremacy.

But getting there—if they have indeed done so—is no easy thing. To start, how do you formulate a problem that would be hard for a classical computer to solve, but easy for a quantum computer? Then there’s what you might call a meta-question: should we test computers by formulating a problem like this, one that is therefore likely to be artificial? Or by putting their noses to the grindstone with real problems?

Anyway, the problem the Google team concocted was to “sample the output of a pseudo-random quantum circuit”. Never mind what exactly that means, but let me try to give you an idea of what it entails.

The power of quantum computing comes from the way a sequence of qubits can be in many different states simultaneously, instead of just one state like a sequence of classical bits would be. This notion of simultaneous and different states is pure quantum mechanics, and that’s why quantum computing is potentially so powerful. The brilliant Richard Feynman recognized this potential and suggested, over three decades ago, that quantum computing might be the way to solve difficult problems in chemistry and physics. It’s taken all these years for technology and design to catch up to that vision.
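
To get a feel for why that matters, here is a toy sketch in Python (my own illustration, with nothing of Google's actual machinery in it). A register of n classical bits holds exactly one of its 2^n possible patterns at any moment; an n-qubit register, by contrast, is described by 2^n complex amplitudes, all present at once.

```python
# A toy illustration, not Google's code: an n-qubit register is
# described by 2**n amplitudes, one per bitstring, whereas n
# classical bits hold just one of those bitstrings at a time.
import itertools

def uniform_superposition(n):
    """State after putting each of n qubits into an equal
    superposition of 0 and 1 (a Hadamard gate on every qubit)."""
    amp = (1 / 2 ** 0.5) ** n  # every basis state gets equal amplitude
    return {"".join(bits): amp
            for bits in itertools.product("01", repeat=n)}

state = uniform_superposition(3)
print(len(state))  # 8 amplitudes for just 3 qubits; doubles per qubit
print(round(sum(a * a for a in state.values()), 10))  # probabilities sum to 1
```

Notice that the dictionary doubles in size with every qubit added; at Sycamore's scale it would need 2^53 entries, which is the nub of what follows.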

Sycamore is essentially a grid of 54 qubits: nine rows of six each. Each qubit on the periphery of the grid connects to its two neighbours; each qubit that’s internal to the grid connects to four neighbours. Each qubit is also connected to a device that tells us its state when we choose to check it. In fact, we can actually read off the state of all the qubits at the same time. It turned out that one of Sycamore’s qubits was faulty, leaving it with 53 qubits and 86 couplers linking them.

Once the grid was set up and ready for action, what happened was what the team’s paper described as “a repeated application of single-qubit and two-qubit logical operations.” Think of a grid of light bulbs that you want to test: you light them up at random, again and again, checking the results each time. Now bulbs can be either on or off, period. Qubits are like that too—but only when you actually check them. Until you do, quantum mechanics tells us that they are effectively both on and off at the same time, and it is the very act of observing that decides whether a given one is on or off. Again, don’t furrow your brow over this; just remember that qubits are fundamentally different from bulbs. Or bits.

Analogous to lighting up the bulbs, Google’s researchers subjected the qubits to those logical operations—one and two at a time, over and over again. When it came time to check the state of the qubits—all 53 of them, thus the state of the entire grid—the check produced, says the paper, “a set of bitstrings, for example 0000101, 1011100 …”.
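
Those readouts can be mimicked in a few lines of Python. This is a toy of my own, with made-up amplitudes, not anything from the paper: each check of the register collapses it to a single bitstring, drawn at random with probability equal to the squared magnitude of that string's amplitude.

```python
# Toy measurement of a 3-qubit register: each check collapses the
# state to one bitstring, chosen with probability |amplitude|**2.
# The amplitudes below are invented purely for illustration.
import random

amplitudes = {"000": 0.8, "101": 0.5, "111": 0.33166}  # roughly normalised
probs = {s: a * a for s, a in amplitudes.items()}      # |amplitude|**2

# Repeatedly "check the grid": each readout yields one bitstring.
samples = random.choices(list(probs), weights=list(probs.values()), k=10)
print(samples)  # ten bitstrings drawn at random, '000' weighted heaviest
```

Run it a few times and the lists differ, but the weighting persists: exactly the sense in which some strings are more likely to appear than others.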

The Google paper draws a fascinating parallel here to Thomas Young’s famous “double-slit” physics experiment from 1801. Aim a beam of light at two narrow slits; what do you see on the wall behind the slits? Not two narrow lines or dots of light. Amazingly, you see instead a pattern of alternating dark and light spots: an “interference pattern”. This is how Young showed that light is made up of waves that “interfere” with each other beyond the slits. Just as some spots in the pattern are light and some are dark, quantum mechanics says this about the strings of 0s and 1s produced by our manipulation of qubits: some strings are more likely to appear than others.

In essence and somewhat simplified, this is the problem the Google team set Sycamore to work on: can we calculate the probability of each string appearing? For as we increase the number of qubits and the number of times we apply those logical operations, this problem becomes “exponentially more difficult” for a classical computer to tackle. But for a quantum computer? Well, remember: 10,000 years versus 200 seconds.
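
To sense that exponential wall, try some back-of-the-envelope arithmetic (mine, not a figure from either paper): a classical simulator that keeps one complex amplitude per bitstring needs 2^n of them, and that count doubles with every qubit added.

```python
# Back-of-the-envelope arithmetic, not from either paper: storing one
# complex amplitude per bitstring of n qubits takes 2**n numbers, and
# at 16 bytes per complex number the memory doubles with each qubit.
n = 53                       # Sycamore's working qubit count
amplitudes = 2 ** n          # one amplitude per possible bitstring
bytes_needed = amplitudes * 16
print(amplitudes)            # about 9 quadrillion amplitudes
print(bytes_needed / 1e15)   # roughly 144 petabytes of memory
```

Add one more qubit and the requirement doubles again, which is why brute-force classical simulation runs out of road so quickly.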

There’s plenty more in that Google paper, fair chunks of it well beyond my understanding. But interestingly, it prompted a response from researchers at IBM, contesting the claim that the Google guys had achieved quantum supremacy at all.

The figure of 10,000 years, they pointed out, was based on assumptions about limitations in computer memory and the consequent trade-off between storage space and computation time. But classical computers, they also pointed out, have many other hardware and software capabilities that can help speed up computations. Those should be accounted for in any comparison to quantum computing. If we do this, wrote the IBM scientists, “an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity.” Not 10,000 years, but two-and-a-half days. Still longer than 200 seconds, but not as mind-numbingly longer as Google’s estimate suggested. In fact, the IBM paper called the 2.5 days “a conservative, worst-case estimate … that with additional refinements … can be further reduced.”

This, then, is not a task, said the IBM team, that a classical computer can’t do but a quantum computer can. So we haven’t reached the bar that John Preskill set. We haven’t achieved quantum supremacy. They suggest that claims of quantum supremacy should be greeted with plenty of scepticism, because of the extreme difficulty of testing and verifying any quantum computing results.

As an aside, this is why I so love mathematics and science. For any claim is invariably greeted with scepticism and the effort to replicate and verify (even debunk if necessary). It’s scepticism that makes science more robust.

So will we ever find an appropriate task that will allow a legitimate, verifiable claim of quantum supremacy? I don’t know. But the IBM scientists also make a point worth thinking about amid all the furore over quantum computing.

Quantum computers, they write, “will never reign ‘supreme’ over classical computers, but will rather work in concert with them, since each have their unique strengths.”

The qubits may be coming. But don’t count out those bits just yet.

*Once a computer scientist, Dilip D’Souza now lives in Mumbai and writes for his dinners. His Twitter handle is @DeathEndsFun*