Ever since D-Wave debuted its quantum annealing processor, there's been a lively debate about whether quantum annealing qualifies as "real" quantum computing, whether the D-Wave system is actually performing quantum annealing at all, and how best to investigate the question in the first place. It's something of a delightful mess, and quite thematically appropriate.
Last year, Google announced that it would be working with NASA to deploy a D-Wave system, and the team has just published an updated set of benchmark data and test results. There's some extremely interesting data here, and the report is written in a fairly accessible way. The quick takeaway: while the D-Wave system is tens of thousands of times faster than off-the-shelf conventional solvers, it runs into trouble against custom-optimized conventional ("classical") software.
Two teams of researchers wrote customized problem solvers that account for the sparse connectivity of the D-Wave processor. That term refers to the fact that while the D-Wave chip (pictured above) at Ames has 509 working qubits, not every qubit connects to every other qubit. Instead, the qubits connect in a branching structure, as shown below:
Connections inside each group of eight qubits are much denser than the connections between groups; this is what's meant by sparse connectivity. The three dead qubits are shown in red. According to the Google post, one team's custom solver runs on Nvidia GPUs, while the other is described as a "tailor-made" solver.
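For a concrete picture of how sparse this is, the 512-qubit layout these chips use (a "Chimera" graph: an 8×8 grid of eight-qubit unit cells, with complete bipartite coupling inside each cell and only a handful of couplers between neighboring cells) can be sketched in code. This is our own illustration, not D-Wave code:

```python
# Sketch of the Chimera connectivity used by 512-qubit D-Wave chips:
# an 8x8 grid of K_{4,4} unit cells. Illustrative only, not vendor code.

def chimera_edges(m=8, n=8, t=4):
    """Return the coupler (edge) list of a Chimera graph C_{m,n,t}.

    Qubits are indexed by (row, col, side, k): side 0 holds the
    "vertical" qubits, side 1 the "horizontal" ones, k in 0..t-1.
    """
    def idx(r, c, s, k):
        return ((r * n + c) * 2 + s) * t + k

    edges = []
    for r in range(m):
        for c in range(n):
            # Intra-cell: complete bipartite K_{t,t} between the two sides.
            for k0 in range(t):
                for k1 in range(t):
                    edges.append((idx(r, c, 0, k0), idx(r, c, 1, k1)))
            # Inter-cell: vertical qubits couple to the cell below...
            if r + 1 < m:
                for k in range(t):
                    edges.append((idx(r, c, 0, k), idx(r + 1, c, 0, k)))
            # ...and horizontal qubits to the cell on the right.
            if c + 1 < n:
                for k in range(t):
                    edges.append((idx(r, c, 1, k), idx(r, c + 1, 1, k)))
    return edges

edges = chimera_edges()
n_qubits = 8 * 8 * 2 * 4                  # 512 qubits in the full graph
print(len(edges))                         # 1472 couplers
print(n_qubits * (n_qubits - 1) // 2)     # 130816 in a complete graph
```

With three dead qubits, the Ames chip has slightly fewer usable couplers than the ideal graph, but the contrast stands: roughly 1,500 couplers versus the ~131,000 a fully connected 512-qubit device would need. Each qubit talks to at most six others.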
Put the three up against each other, and an interesting pattern emerges. In the graphs below, a flatter line indicates that the time to solve each problem does not increase much as the problem becomes more difficult. A steeply rising line indicates that solving time grows rapidly as difficulty increases.
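The kind of classical competitor at issue here is typically a simulated-annealing solver run over the chip's sparse Ising graph, where short neighbor lists make each update cheap. A minimal sketch of that technique (our own hypothetical illustration, not either benchmark team's code):

```python
import math
import random

# Minimal simulated annealing for a sparse Ising problem; hypothetical
# illustration, not either benchmark team's solver.
# Energy: E(s) = sum over couplers of J[i,j] * s[i] * s[j], s[i] in {-1, +1}.

def anneal(n, couplers, sweeps=2000, t_start=3.0, t_end=0.05, seed=0):
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    # Adjacency lists: sparse connectivity keeps each spin's neighbor
    # list short, so the energy change of a single flip is cheap.
    nbrs = [[] for _ in range(n)]
    for (i, j), J in couplers.items():
        nbrs[i].append((j, J))
        nbrs[j].append((i, J))
    for step in range(sweeps):
        # Geometric cooling schedule from t_start down to t_end.
        T = t_start * (t_end / t_start) ** (step / (sweeps - 1))
        for i in range(n):
            # Flipping spin i changes the energy by -2 * s_i * (local field).
            local = sum(J * spins[j] for j, J in nbrs[i])
            dE = -2 * spins[i] * local
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i] = -spins[i]
    energy = sum(J * spins[i] * spins[j] for (i, j), J in couplers.items())
    return spins, energy

# Tiny usage example: a 4-spin antiferromagnetic chain, whose ground
# state is alternating spins with energy -3.
spins, energy = anneal(4, {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0})
print(energy)
```

The curves in Google's graphs track how the runtime of solvers like this grows with problem hardness, compared with the D-Wave hardware's own scaling.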
Google's conclusion is not that D-Wave's computer is or isn't quantum, but that the chip itself is at an early stage. Sparse connectivity sharply limits performance, and Google is optimistic that we'll see different results once more advanced chips come to market. On the whole, we're inclined to agree. Quantum computing design is still in its very earliest stages, so it's not surprising that off-the-shelf, highly optimized classical solutions can match the performance of current quantum designs.
Continued scaling, in terms of both more qubits and richer connectivity, will be essential to delivering on D-Wave's long-term performance claims. But with the interest (and dollars) flowing in, we'll likely see future models in short order. Even then, it remains to be seen whether D-Wave's quantum annealing can compete with the real, maximally entangled quantum computing systems being developed by the likes of IBM.