A team of researchers led by Jeff Thompson at Princeton University has achieved a significant breakthrough in the development of quantum computers. They have developed a novel method that enables the detection of errors in quantum computers, making them up to 10 times easier to correct. This advancement holds immense potential for accelerating progress towards large-scale quantum computers capable of solving the world’s most complex computational problems.
Traditionally, quantum computing hardware research has focused on minimizing the probability of errors occurring. The team led by Jeff Thompson took a different approach: rather than concentrating solely on preventing errors, they devised a method that makes the errors that do occur far easier to identify. This marks a new direction for quantum computing hardware research.
The team’s findings were published in the journal Nature on October 11. The paper outlines the details of their approach. Collaborators on this project include Shruti Puri from Yale University and Guido Pupillo from the University of Strasbourg.
Despite decades of advancements in qubit technology, the fundamental building block of quantum computers, errors remain inevitable. The central challenge in developing quantum computers lies in correcting these errors, and doing so requires knowing that an error occurred and where. However, the measurements used to detect errors can themselves disturb the qubits and introduce new errors, making error detection a delicate balancing act.
According to Jeff Thompson, not all errors in quantum computers are created equal. Because some kinds of errors are easier to handle than others, steering errors toward the more benign kinds presents an opportunity for improvement. Thompson’s team works on a type of quantum computer based on neutral atoms, in which qubits are stored in the spin of individual ytterbium atoms. The researchers used an array of 10 qubits to measure error probabilities while manipulating the qubits individually and in pairs.
The study showed error rates near the state of the art for this type of system: 0.1 percent per operation for single qubits and 2 percent per operation for pairs of qubits.
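To see why even small per-operation error rates matter, consider a rough back-of-the-envelope illustration (not a calculation from the paper): if errors strike independently with probability p per gate, a circuit of n gates runs error-free with probability roughly (1 - p)^n, so rates compound quickly with circuit depth.

```python
# Rough illustration (not from the paper): how per-gate error rates
# compound over a circuit. Assuming independent errors with
# probability p per gate, n gates all succeed with probability
# (1 - p) ** n.

def success_probability(p: float, n_gates: int) -> float:
    """Probability that n_gates consecutive gates all run error-free."""
    return (1.0 - p) ** n_gates

for p, label in [(0.001, "single-qubit (0.1%)"), (0.02, "two-qubit (2%)")]:
    print(f"{label}: 100 gates succeed with p = {success_probability(p, 100):.3f}")
```

Even at a 2 percent error rate, a 100-gate circuit finishes without error only about 13 percent of the time, which is why error correction, not just error reduction, is essential.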
The most significant outcome of this research is not the low error rates themselves but the ability to characterize errors without damaging the qubits. By using a different set of energy levels within the atom to store the qubits, the researchers could monitor the qubits in real time during a computation. With this measurement scheme, qubits that had suffered an error emitted light, while error-free qubits remained dark.
This approach effectively converts errors into a specific type known as erasure errors. Erasure errors have been studied extensively in the context of qubits made from photons, where they are comparatively simple to correct, but this research marks the first application of the erasure-error model to matter-based qubits. It follows a 2022 theoretical proposal from Thompson, Puri, and Kolkowitz of the University of California-Berkeley.
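A toy classical analogy (not the paper's protocol) shows why errors at known locations, i.e. erasures, are easier to correct than errors at unknown locations. Encode one bit as three copies: if we know which copies were lost, any single survivor recovers the bit, but if flips strike at unknown positions, majority voting tolerates only one flipped copy.

```python
# Toy classical analogy (not the paper's protocol): a 3-copy
# repetition code, comparing erasures (known locations) against
# bit flips (unknown locations).

def decode_with_erasures(received, erased):
    """Decode when the erased positions are known: any surviving copy
    reveals the bit, so up to 2 of 3 erasures are correctable."""
    survivors = [bit for bit, lost in zip(received, erased) if not lost]
    return survivors[0] if survivors else None

def decode_majority(received):
    """Decode when error locations are unknown: majority vote,
    which tolerates only a single flipped copy out of three."""
    return int(sum(received) >= 2)

# Encoded bit: 1, sent as [1, 1, 1].
# Two erasures at known positions: still decodable.
print(decode_with_erasures([1, 0, 0], erased=[False, True, True]))  # -> 1

# Two flips at unknown positions: majority vote decodes incorrectly.
print(decode_majority([1, 0, 0]))  # -> 0 (wrong; the encoded bit was 1)
```

The same intuition carries over to quantum codes: knowing where an error happened roughly doubles how many errors a given code can handle, which is why converting errors into erasures lightens the error-correction burden.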
In the demonstration, approximately 56 percent of one-qubit errors and 33 percent of two-qubit errors were detected before the end of the computation. Remarkably, checking for errors did not itself significantly increase the error rate, demonstrating the feasibility of practical implementation. Further engineering is expected to raise the fraction of errors detected.
The researchers anticipate that, with optimized protocols, nearly 98 percent of all errors could be detected with this new approach. That has the potential to reduce the computational cost of error correction by an order of magnitude or more.
Other research groups have already begun adapting this error detection architecture. Independent teams at Amazon Web Services and Yale have demonstrated how this paradigm can enhance systems utilizing superconducting qubits.
Jeff Thompson emphasizes the versatility of their approach, which can be applied flexibly in various qubits and computer architectures. He states, “We need advances in many different areas to enable useful, large-scale quantum computing. What’s nice about erasure conversion is that it can be used in many different qubits and computer architectures, so it can be deployed flexibly in combination with other developments.”
The breakthrough achieved by Thompson and his team represents a pivotal step forward in quantum computing. It brings us closer to realizing the full potential of these extraordinary machines, paving the way toward complex computational problems that are intractable for traditional computers.