Google has solved the longest calculation in the universe in just five minutes: here’s how it did it and what the result means
How long does it take to solve the longest calculation in the universe? ‘Only’ five minutes, if you use Google’s new quantum chip.
A supercomputer, and a very fast one at that, would have taken ten septillion years, a timeframe well beyond the age of the universe itself. Google’s new quantum chip has cut that time dramatically: in just five minutes, it completed a calculation that had seemed simply impossible.
The result, published in the journal Nature, was achieved thanks to Willow, Google Quantum AI’s new quantum chip, which promises to pave the way for the construction of useful, large-scale quantum computers.
The longest calculation in the universe
The calculation was thought to be out of reach on the ‘traditional’ computing timescales at our disposal. As already mentioned, even with the most powerful machines we currently have, it would take 10 septillion years, a figure difficult even to imagine (a 1 followed by 25 zeros) and one that far exceeds the age of the universe.
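To put that figure in perspective, here is a back-of-the-envelope comparison with the age of the universe. The 13.8-billion-year estimate is a standard cosmological value assumed for illustration; it does not come from the article or the Nature paper.

```python
# Rough comparison of the claimed classical runtime with the age of the universe.
# 10 septillion years = 1e25 years (figure from the article); ~13.8 billion years
# is an assumed standard estimate for the age of the universe.
classical_runtime_years = 10 * 10**24   # 10 septillion years
age_of_universe_years = 13.8e9          # ~13.8 billion years (assumption)

ratio = classical_runtime_years / age_of_universe_years
print(f"The simulation would take about {ratio:.1e} times the age of the universe.")
# -> roughly 7e14, i.e. hundreds of trillions of universe lifetimes
```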
Impossible for any classical machine, but not for the new technology developed by Google, which solved it in just five minutes. The result, published in the journal Nature, showed that the more qubits used in Google’s new quantum processing unit (QPU), called Willow, the fewer errors occur, keeping the system quantum rather than letting it slip back into classical behaviour.
![A frame taken from the video introducing Willow, Google's quantum chip that solved the longest known calculation.](https://i0.wp.com/www.pegasoftsrl.it/wp-content/uploads/2024/12/Screenshot-2024-12-11-113757.png?resize=1024%2C576&ssl=1)
How Google solved the world’s longest calculation
Willow is the new chip from Google Quantum AI that lays the groundwork for building useful, large-scale quantum computers. It reduces errors exponentially as the number of computing units (qubits) used increases. Errors are one of the biggest challenges in quantum computing: without error-correction technologies, roughly one qubit in 1,000 will fail.
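As an illustration of what exponential error reduction looks like in practice, here is a minimal sketch. The starting error rate and the factor-of-two suppression per step in grid size are assumptions chosen to mirror the roughly halving behaviour Google described when growing its qubit grids; the numbers are illustrative, not values from the paper.

```python
# Illustrative sketch: exponential suppression of the logical error rate as the
# error-correcting code grows. The base rate and the factor-of-2 suppression per
# step are assumptions for illustration only, not Willow's measured data.
base_error_rate = 3e-3        # hypothetical logical error rate for the smallest grid
suppression_factor = 2.0      # assumed error reduction per increase in grid size

for step, distance in enumerate([3, 5, 7]):   # grids of increasing size (more qubits)
    logical_error = base_error_rate / suppression_factor**step
    print(f"code distance {distance}: logical error rate ~ {logical_error:.1e}")
# Each larger grid uses more physical qubits yet makes *fewer* logical errors,
# which is the behaviour described above.
```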
Qubits rapidly exchange information with their environment, making it difficult to protect the data needed to complete a calculation. The more qubits used, the more errors can occur, pushing the system back toward classical behaviour. This error rate is one of the main obstacles to making quantum computers perform well enough to outperform today’s fastest supercomputers, which is why researchers are increasingly focused on building quantum computers whose qubits are less prone to errors.
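The claim that more qubits mean more errors when nothing corrects them follows from elementary probability: if each qubit fails independently with a 1-in-1,000 chance, the odds that at least one qubit fails grow quickly with the size of the device. The qubit counts below are arbitrary examples, not figures from the article.

```python
# Probability that at least one qubit fails, assuming each fails independently
# with probability 1/1000 and no error correction is applied (illustrative only).
p_single = 1 / 1000

for n_qubits in (10, 100, 1000, 10000):   # arbitrary example device sizes
    p_at_least_one_failure = 1 - (1 - p_single) ** n_qubits
    print(f"{n_qubits:>6} qubits: P(at least one failure) = {p_at_least_one_failure:.3f}")
# Without correction, a 1,000-qubit device is more likely than not to suffer an error.
```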
A ‘below the threshold’ result
The historic result is described in the field as ‘below the threshold’. Demonstrating below-threshold operation is crucial to showing genuine progress in error correction, a goal researchers have pursued since 1995, when Peter Shor first introduced quantum error correction.
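One common way to express the threshold idea is the textbook surface-code scaling, in which the logical error rate behaves like (p / p_th)^((d + 1) / 2): when the physical error rate p sits below the threshold p_th, enlarging the code (and using more qubits) suppresses errors, while above the threshold it makes them worse. The formula and the numbers below are an illustrative assumption, not Willow’s data.

```python
# Toy model of the error-correction threshold: logical error ~ (p / p_th) ** ((d + 1) / 2).
# The formula and all numerical values are illustrative assumptions, not Google's results.
p_threshold = 0.01            # assumed threshold for the physical error rate

def logical_error(p_physical: float, distance: int) -> float:
    return (p_physical / p_threshold) ** ((distance + 1) / 2)

for p_physical in (0.005, 0.02):          # one value below, one above the threshold
    trend = [logical_error(p_physical, d) for d in (3, 5, 7)]
    print(f"p = {p_physical}: logical errors at d=3,5,7 -> "
          + ", ".join(f"{e:.2e}" for e in trend))
# Below threshold the logical error shrinks as the code grows ("below the threshold");
# above it, adding qubits only amplifies the errors.
```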
As Julian Kelly, director of quantum hardware at Google Quantum AI, explained when presenting the results of the study, Willow is “the first sub-threshold system. It is the most convincing prototype of a scalable logic qubit built to date. It is a strong signal that it is possible to build large, useful quantum computers. Willow brings us closer to running practical and commercially relevant algorithms that cannot be replicated on conventional computers”.
He added: “What we have been able to do in quantum error correction is a really important milestone for the scientific community and for the future of quantum computing, which is to show that we can create a system that works below the quantum error correction threshold”.