From chess to quantum advantage: Climbing the ladder of computational complexity one checkmate at a time

By Tommaso Demarie, CEO and co-founder at Entropica Labs.

A game of chess

I love playing chess.

I love to feel the intellectual strain during a game, the focus while constructing a strategy during the opening, and the excitement for a great execution (or despair for a foolish move).

One of the greatest chess players in history, and a personal favourite of mine, is Garry Kasparov.

Kasparov was notable for his aggressive style on the board. In his prime, he mastered the art of sacrificing pieces to gain positional advantage, from where he would launch devastating attacks. Interestingly enough, it is usually possible to find countermoves to a sacrifice in chess, but — from a human perspective — that requires precise and heavy computational analysis of moves and their exponentially many ramifications. With finite time on their hands, and under a huge amount of pressure, most players were unable to unravel Kasparov’s riddles and would end up brutally beaten.

In 1985, Kasparov defeated Anatoly Karpov to become the World Chess Champion.

A few months earlier, Kasparov played simultaneous games of chess against 32 of the world’s best chess-playing computers in Hamburg, sent by the four leading chess machine manufacturers. Kasparov annihilated the machines, achieving a perfect 32–0 score. But that was just winning a battle in the long race against the machines.

“If a machine could beat the best human player at chess, would that machine be considered intelligent?” For most of the last century, the answer would have been yes. Computer chess had been considered the embodiment of accomplishments in artificial intelligence, to the extent of chess being termed the Drosophila of AI.

Today, however, computers have reached an exceptional level of sophistication in games. It is hence hard to imagine any human defeating them, not only in chess, but also in more complex games like Jeopardy!, Go, and poker.

Part of the reason why we remain hesitant to call these machines intelligent, I believe, is our fear of accepting that intelligence can be created out of plastic, wires and transistors. We also sense that human intelligence is determined by more than seeing a checkmate 30 moves ahead. Thus, the goalposts keep moving, and the defining moment in artificial intelligence, when a machine and a human will be indistinguishable, seems to remain in the very distant future.

But let us go back to Kasparov.

In 1997, at the peak of his glory, the great Russian player faced Deep Blue, a chess machine built and programmed by IBM, in a second match (after the human’s victory in the first match a year earlier). The stakes were extremely high. “What if the computer wins?”, people wondered.

The cover of Newsweek read “The Brain’s Last Stand”, signifying the cultural weight of the event. From that edition of the magazine: “[…] the question has arisen: can the master of a pursuit once thought of as the pinnacle of human intelligence beat something that has never lived?” Kasparov was adamant: “I never lose. I have never lost in my life.”

Until he lost. To a box of transistors.

In game 2, Kasparov described the machine as “playing like a god, for one moment”. Deep Blue dominated Kasparov, physically and mentally, and humanity witnessed a real breakthrough in computer science, whose consequences continue to remain relevant.

These developments redefine our understanding of what lies at the interplay of information theory, engineering, and physics. They also help us envision the ever greater possibilities ahead.

However, these moments are also very rare. Due to the rapid advancements in artificial intelligence and machine learning, it has become increasingly hard to single out equally exciting eureka moments. After you have been to the moon, any other successful launch — even if extraordinary per se — becomes routine (unless you make it to Mars, or put a car in space).

Turing machines and quantum physics

The exciting news is that we are reaching another breakthrough in computation, one that could see the beginning of a new computational revolution.

This breakthrough goes by the name of quantum supremacy or, as we prefer to say, quantum advantage. It means providing solid evidence that a quantum computer can solve a problem that a conventional, classical computer would only be able to solve at an unreasonable expense of resources (time, energy, or processing power).

To better understand the reach and implications of such an event, it is useful to go back in time a little and clarify what a computing machine actually is, and what its limitations are.

As with many stories in the history of computation, our starting point is Alan Turing, one of the greatest minds of the 20th century.

Scientifically, Turing’s most exceptional contribution was defining how to think about computing machines. In 1936, he conceived what we now call a Turing machine.

A Turing machine is an abstract machine described as an infinitely long rewritable tape divided into cells (a memory), equipped with a head that can move back and forth along the tape. Each cell can carry exactly one symbol, and the head can read and edit one cell at a time. The head follows a series of instructions, and its behaviour is completely determined by the configuration of the machine at each step (more rigorous details can be found here).
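To make this concrete, here is a minimal sketch of a Turing machine in Python. The machine itself (a trivial one that flips every bit on the tape and halts at the first blank) and all the names in the snippet are illustrative inventions, not a construction from Turing’s paper:

```python
# A minimal Turing machine: a tape, a head, and a transition table.
# Each rule maps (state, symbol read) -> (new state, symbol to write, move).

def run_turing_machine(tape, transitions, state="start", blank="_"):
    """Run the machine until it enters the 'halt' state, then return the tape."""
    tape = dict(enumerate(tape))  # sparse tape: cell index -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: invert every bit, halt at the first blank cell.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("0110", flip_bits))  # -> 1001
```

Despite its simplicity, this is the whole model: everything a laptop computes can, in principle, be expressed as such a table of rules acting on a tape.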

Equipped with a Turing machine, we can take a leap and think a little harder about the physical laws of the universe.

It is believed that all physically computable functions, in other words all processes that happen in the universe, can be mapped to, and computed using, a Turing machine, even if it might take an extremely long time to do so.

This far-reaching hypothesis is known as the Church-Turing thesis, an idea that has helped us look at the physical world through the lens of information theory. It suggests that a single abstract model is sufficient to describe all physical processes.

Well, there is more to the story. In an effort to understand the complexity of Nature, a few decades ago researchers proposed an extension to the Church-Turing thesis. This hypothesis states that all physical processes are computationally simple: not only can they be simulated by a Turing machine, but this can be done at a reasonable expense of computational resources.

Wow. This means that anything that happens in Nature, such as protein folding, black-hole physics, or boson sampling, can be reproduced efficiently by a conventional computer (our practical version of a Turing machine).

The Extended Church-Turing thesis is a bold claim. And to challenge bold claims we need an intrepid hero: This is where quantum physics joins the show.

What if there were quantum phenomena intrinsically hard to reproduce efficiently using classical computers? Their existence would invalidate the Extended Church-Turing thesis. It would also pave the way to the possibility that a computing device that is itself quantum could outperform classical computers in certain tasks.

Chemists and material scientists have struggled with this issue for decades: Simulating the structure of complex molecules and materials has always been an intractable computational hurdle. It felt like Nature would draw the curtain over whatever happens at small scales, making it hard to comprehend using classical resources. This realisation inspired Feynman, at a conference co-organised by IBM and MIT in 1981, to suggest that our conventional computational paradigm was incomplete. Nature is quantum, and hence the simulation of its most intimate aspects should require a quantum computer.

Checkmate, classical computing

The long introduction served to capture the essence of what quantum advantage is all about.

We believe (with some notable exceptions) that quantum computers are able to solve problems that would remain forever infeasible using only classical machines.

But this advantage has never been tested experimentally.

However, thanks to the exceptional technical advancements towards building larger and more accurate quantum processors, it is very likely that 2019 will be the year quantum computers will show their unique edge over classical machines.

What billion-dollar problem will be used to show quantum advantage? Will someone crack all cryptographic protocols over the internet? Will quantum computers destroy the blockchain? Will they be used to find a cure for cancer?

None of this (yet).

The proposed technique to show strong evidence of quantum advantage is, embarrassingly, rather obscure and dull — unless you are a mathematician (or a computer scientist, an information theorist, the QWA team, you get the gist). It is known as Random Circuit Sampling, and was put forward by the Google team (with the theoretical foundation recently published in Nature Physics).

The task is simple, and can ideally be performed on near-term quantum devices: It consists of performing repeated measurements to sample bit-strings from the output of random quantum circuits.
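As a toy illustration, the idea can be sketched with a tiny classical statevector simulation using only NumPy. The circuit structure below (layers of random single-qubit unitaries interleaved with CZ gates on neighbouring pairs) is an illustrative sketch, not the exact circuit family used by the Google team:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4  # number of qubits; the statevector holds 2**n complex amplitudes

def random_unitary():
    """A random 2x2 unitary, via QR decomposition of a Gaussian matrix."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases

def apply_gate(state, gate, qubits):
    """Apply a 2x2 or 4x4 gate to the given qubits of the statevector."""
    k = len(qubits)
    s = np.moveaxis(state.reshape([2] * n), qubits, list(range(k)))
    s = (gate @ s.reshape(2**k, -1)).reshape([2] * n)
    return np.moveaxis(s, list(range(k)), qubits).reshape(-1)

state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # start in |00...0>
cz = np.diag([1.0, 1.0, 1.0, -1.0]).astype(complex)  # entangling gate

for layer in range(8):
    for q in range(n):
        state = apply_gate(state, random_unitary(), [q])
    for q in range(layer % 2, n - 1, 2):  # alternate the pairing each layer
        state = apply_gate(state, cz, [q, q + 1])

# Repeated measurement = sampling bitstrings with probability |amplitude|^2
probs = np.abs(state) ** 2
probs /= probs.sum()  # guard against floating-point drift
samples = rng.choice(2**n, size=10, p=probs)
print([format(s, "04b") for s in samples])
```

The crucial point is hidden in the first lines: the classical simulation must track all `2**n` amplitudes, which is trivial for 4 qubits and hopeless for 72.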

Bristlecone, the 72-qubit quantum chip Google plans to use to show quantum advantage — © Google Quantum AI LAB

The underlying reasoning? Quantum systems can be naturally correlated in fashions that classical systems cannot achieve.

Quantum correlations, to be fully reconstructed classically, require a very large amount of memory. And the memory requirements grow exponentially fast with the size of the quantum system one is trying to simulate. For this reason, any reasonably large classical computer would not be able to fully recreate the outputs due to the quantum correlations. It would then fail in reproducing the results of random circuit sampling. This is our initial gateway to quantum advantage.
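The arithmetic behind that exponential wall is easy to sketch. A full n-qubit statevector holds 2^n complex amplitudes; assuming 16 bytes per amplitude (two double-precision floats), the memory requirement doubles with every added qubit:

```python
# Memory needed to store a full n-qubit statevector classically:
# 2**n complex amplitudes, 16 bytes each (two 64-bit floats).

def human(nbytes):
    """Format a byte count in binary units."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB"):
        if nbytes < 1024:
            return f"{nbytes:.0f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.0f} YiB"

for n in (10, 30, 50, 72):
    print(f"{n} qubits: {human(16 * 2**n)}")
# 10 qubits fit in a few KiB; 30 qubits already need 16 GiB;
# 72 qubits (the size of Bristlecone) would need tens of zettabytes.
```

No conceivable classical memory holds the full state of a few dozen well-entangled qubits, which is exactly why random circuit sampling was chosen as the test.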

Google is working together with NASA to demonstrate quantum advantage through random circuit sampling within the next few months. NASA will use its classical supercomputer Pleiades to benchmark the performance of Google’s quantum chip. This will serve as validation of any claim of quantum advantage.

Showing quantum advantage would be an experimental violation of the Extended Church-Turing thesis. This would mean that some natural processes are more equal than others.

The next step would then be designing quantum computers and algorithms that can solve practical problems better — in the sense stated above — than classical computers.

The folks at Rigetti Computing have contributed to spice things up by announcing the Rigetti Quantum Advantage Prize: a $1M award for the first conclusive demonstration of quantum advantage on a real industry problem using their Quantum Cloud Services platform. They believe such a demonstration of quantum advantage is likely to happen in the next two to five years.

The way we think about the world shapes our ability to interact with it. In this sense, quantum computing is exceptional because, to paraphrase Google’s quantum team, it integrates the two largest technological and intellectual revolutions of the last century: information theory (and the related computing technology) and quantum mechanics.

Quantum advantage is going to be a milestone in the history of computer science, like Deep Blue’s victory over Kasparov. In a similar way, though, the goalposts will keep moving. Quantum advantage on an esoteric problem will be exciting at first, but will soon become routine.

The extended community, including governments, academic groups, tech giants, startups and investors, will have to work closely together to push the frontier of quantum-enabled innovations. This is a good thing, because a similar approach has been very successful throughout the development of classical computing.

The challenges ahead

The most fascinating aspect of the effort towards an unambiguous proof of quantum advantage is that classical computing is not ready to concede just yet. Despite the construction of quantum chips of ever-growing complexity and capabilities, classical computing continues to strike back.

In a plot twist that would have delighted Agatha Christie, 2018 saw the demolition of two quantum algorithms believed to offer quantum advantage over their best classical counterparts. The effort was led by Ewin Tang, an 18-year-old computer scientist who proved that the separation between problems that require a quantum computer to be solved and those that do not is less obvious than previously thought.

I argue that these results are a huge success.

Improving classical algorithms by “thinking quantum” is a very powerful avenue that could lead to exceptional results beyond the model of computation used.

Quantum and classical are engaged in a formidable race, pushing each other to their limits. The good news? We are all going to be victors, benefiting from great advancements in our computational abilities.

A more serious challenge I foresee concerns the explosive growth that the field of quantum computing will experience following the announcement of quantum advantage.

Today, there is already a lot of excitement about the potential of quantum computation, and it is difficult for non-experts to differentiate between legitimate, solid claims and more science-fiction ones.

In recent years, AI and blockchain (to name two) have experienced an outbreak of companies promising solutions beyond the current capabilities of the technologies. Equally, it is very likely that the quantum ecosystem will suffer from unwarranted hype. This is a problem, especially for a nascent technology that still has much to prove.

In this sense, we welcome the recent partnership programs between major quantum hardware producers and quantum software companies. They are strong illustrations of the great efforts within the community to bring quantum computing to the mainstream, in a careful but steady and credible way.

QWA is helping to bridge the gap between quantum and business. The go-to channel for suggestions, feedback and questions is our email address