A few weeks ago at CES 2025, Jensen Huang, the CEO of Nvidia, suggested that practical applications of quantum computing might still be around 20 years away. In contrast, Google’s quantum lead, Hartmut Neven, recently told Reuters that we could see tangible uses for quantum computing much sooner—potentially within five years. So, who’s got it right?
Huang argues that today’s quantum systems simply don’t have enough qubits, estimating that they fall short by five or six orders of magnitude, in other words by a factor of roughly 100,000 to 1,000,000. But why such a high demand for qubits? Research indicates that a higher number of qubits leads to fewer errors, and fewer errors mean a quantum computer whose answers you can actually trust. Let’s delve into why.
A qubit, or quantum bit, is fundamentally different from the binary bit in a traditional computer: rather than being strictly 0 or 1, it can exist in a superposition, effectively holding both values at once. The catch is that qubits are built from quantum particles, which are notoriously fragile. During a quantum computation, roughly one in every thousand qubit operations goes wrong, knocking a qubit out of its intended state and corrupting the calculation.
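To make the “both values at once” idea concrete, here is a minimal numerical sketch in plain Python and NumPy. It is a toy classical simulation of measuring a qubit, not code for real quantum hardware, and the equal-superposition amplitudes and sample count are illustrative choices.

```python
import numpy as np

# Toy model of a single qubit. A classical bit is 0 or 1; a qubit's state
# is a pair of amplitudes (a, b) over the outcomes 0 and 1, normalized so
# that |a|^2 + |b|^2 = 1. Measuring yields 0 with probability |a|^2 and
# 1 with probability |b|^2.
rng = np.random.default_rng(0)

# An equal superposition: the qubit is "both" 0 and 1 until measured.
a = b = 1 / np.sqrt(2)

# Simulate 10,000 measurements; expect roughly a 50/50 split.
samples = rng.choice([0, 1], size=10_000, p=[abs(a) ** 2, abs(b) ** 2])
print(f"measured 0: {np.mean(samples == 0):.1%}, "
      f"measured 1: {np.mean(samples == 1):.1%}")
```

Until the measurement happens, both outcomes are live at once, which is where quantum computing’s power comes from; the fragility described above is the price of keeping that delicate state intact.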
Historically, traditional computers faced a similar challenge. Take the ENIAC, for instance: it relied on more than 17,000 vacuum tubes to switch and store its bits, and those tubes burned out often enough to cause regular errors. The solution back then was relatively simple: we phased out vacuum tubes in favor of silicon transistors, which fail on the order of once in a billion operations.
Unfortunately, no such straightforward swap exists for quantum computing. There is no sturdier replacement part waiting in the wings: a qubit’s fragility comes from the same quantum physics that makes it useful in the first place, so engineers have to work around its inherent properties rather than design them away.
This brings us back to the shortage of qubits. Late last year, Google’s Willow quantum chip demonstrated that adding more qubits can actually reduce errors. Essentially, Google built “logical qubits” out of groups of physical qubits that redundantly encode the same information. The redundancy acts as a safeguard: if one physical qubit fails, the others preserve the encoded value and the calculation survives. The more physical qubits in each group, the more failures can be absorbed, and the more accurate the machine becomes.
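A rough way to see why redundancy helps is the textbook repetition code with majority voting. The sketch below is a back-of-the-envelope calculation under simplifying assumptions (independent failures, classical bit-flips only), not Willow’s actual scheme; Willow uses a far more sophisticated surface code, since quantum states can’t simply be copied or read out mid-computation. The 1-in-1,000 failure rate is the ballpark figure from earlier.

```python
from math import comb

# Repetition-code sketch: encode one logical bit into n physical qubits and
# decode by majority vote. The logical value is lost only when more than
# half of the copies fail at once.
p = 1e-3  # assumed per-qubit failure rate, ~1 in 1,000

def logical_error_rate(n: int, p: float) -> float:
    """Probability that a majority of n independent copies fail."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 7):
    print(f"{n} physical qubits -> "
          f"logical error rate ~ {logical_error_rate(n, p):.1e}")
```

With these numbers, going from 1 to 7 physical qubits per logical bit drives the failure rate from one in a thousand down to roughly one in tens of billions. The flip side is the overhead: every logical qubit consumes many physical ones, which is why estimates of the physical-qubit counts needed for useful machines run so high.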
However, since qubits fail often and real-world applications demand high accuracy, each logical qubit will consume many physical ones, so viable quantum computers will need a very large supply of them. Huang believes it could take as long as 20 years to build up that supply; Neven, more optimistically, claims it could happen in five.
So, does Google know something Nvidia doesn’t, or is this a friendly rivalry stirring the pot? At this point, the answer remains elusive. Perhaps Neven’s comments were partly aimed at buoying quantum computing stocks, which shed roughly $8 billion in market value after Huang’s earlier remarks.
Whenever the breakthrough occurs, Google envisions leveraging quantum computing to devise enhanced batteries for electric vehicles, develop new medications, and potentially discover alternative energy sources. While the idea of these projects materializing within a five-year time frame seems ambitious, it won’t be long before we find out just how right or wrong Neven’s predictions are.