When industry insiders talk about a future where quantum computers can solve problems that classical, binary computers can’t, they’re referring to something called the “quantum advantage.”
To achieve this advantage, quantum computers must be robust enough to scale in size and efficiency. Quantum computing experts believe that the biggest barrier to the scalability of quantum computing systems is noise.
The Harvard team’s research paper, titled “Logical quantum processor based on reconfigurable atom arrays,” describes a method by which quantum computing processes can be run with error-resistance and the ability to overcome noise.
According to the paper:
“These results herald the advent of early error-corrected quantum computation and chart a path toward large-scale logic processors.”
Insiders refer to the current state of quantum computing as the Noisy Intermediate-Scale Quantum (NISQ) era. This era is defined by quantum computers with fewer than 1,000 qubits (the quantum version of a computer bit) that are, by and large, “noisy.”
Noisy qubits are a problem because they are prone to errors that corrupt calculations.
The Harvard team claims to have achieved “early error-corrected quantum computations” that overcome noise at world-first scales. Judging by their paper, however, they haven’t achieved full error correction, at least not as most experts in the field would define it.
Errors and measurements
Quantum computing is difficult because, unlike a classical computer bit, a qubit typically loses its information when it is measured, and the only way to know whether a given physical qubit has experienced a computational error is to measure it.
Full error correction will require the development of a quantum system capable of detecting and correcting errors as they arise during the calculation process. So far, these techniques have proven very difficult to scale.
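To build intuition for why detecting and correcting errors mid-calculation is the goal, here is a minimal classical analogy: a three-bit repetition code that fixes a single bit flip by majority vote. This is only a toy sketch, not the Harvard team's quantum protocol; real quantum codes cannot simply copy states (the no-cloning theorem) and must infer errors indirectly.

```python
from collections import Counter

def encode(bit, copies=3):
    """Encode one logical bit as `copies` physical bits (repetition code)."""
    return [bit] * copies

def correct(codeword):
    """Detect and correct a single bit flip by majority vote."""
    counts = Counter(codeword)
    return counts.most_common(1)[0][0]

# A single flipped physical bit is corrected transparently:
noisy = [1, 0, 1]  # encoded logical 1; the middle bit was flipped by noise
assert correct(noisy) == 1
```

The quantum versions of such codes replace the majority vote with ancilla-qubit measurements that reveal which error occurred without collapsing the encoded data, and it is scaling that machinery that has proven so difficult.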
Instead of correcting errors during calculations, the Harvard team’s processor adds a post-processing error-detection phase in which erroneous results are identified and rejected.
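The distinction between correction and detection-plus-rejection can be sketched in a few lines. In this illustrative Python example (the function names and the error flag are hypothetical, not drawn from the paper), each run carries a flag from its consistency checks, and post-selection simply discards the flagged runs rather than repairing them:

```python
import random

def run_with_flags(n_runs, error_rate=0.2, seed=None):
    """Simulate runs of a computation, each tagged with an error flag
    (a stand-in for ancilla-style consistency checks)."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        errored = rng.random() < error_rate
        value = 0 if errored else 1  # hypothetical measured outcome
        results.append((value, errored))
    return results

def postselect(results):
    """Error *detection*: keep only the runs whose checks came back clean,
    throwing the rest away instead of correcting them."""
    return [value for value, flagged in results if not flagged]
```

The trade-off is throughput: every rejected run is wasted work, so the approach sidesteps mid-computation correction at the cost of repeating the calculation more times.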
This, according to the research, offers a new and perhaps accelerated path for scaling quantum computers beyond the NISQ era and toward quantum advantage.
While the work is promising, a DARPA press release indicates that at least an order of magnitude more than the 48 logical qubits used in the team’s experiments would be needed to “solve any major problems envisioned for quantum computers.”
The researchers claim that the techniques they have developed should be scalable to quantum systems with more than 10,000 qubits.