This processor is state-of-the-art for silicon quantum computing. It's roughly where modalities like superconducting were 15 years ago, and superconducting devices are nowhere near this noisy these days: https://www.nature.com/articles/s41586-024-08449-y
Sure, I'm not disagreeing that this processor is noisy, just providing enough context to say that it's fine. Historically, these devices improve enough to get below threshold, at which point it doesn't matter that they are noisy, because error correction protocols can be run on top of them.
What are the real-world use cases now, today? The only things I see in the QC space are QC stocks and funding paying for the employment of scientific experimentation, which isn't a real-world application.
Do I have to wait 15 to 30 years for a series of real-world-changing breakthroughs that I can already do on an NVIDIA GPU card?
That doesn't sound exponential at all; in fact, that sounds very, very bearish.
I think the point being made is that the graphs don't show progress toward real-world applications. Being 99.9999999% or 0.000001% of the way to a useful application could equally be argued to be no progress, given the stated metric. Is there a guarantee that these things can and will work given enough time?
Quantum theory says that quantum computers are mathematically plausible. It doesn't say anything about whether it's possible to construct a quantum computer of a given configuration in the real world. It's entirely possible that there's a physical limit that makes useful quantum computers impossible to construct.
Quantum theory says that quantum computers are physically plausible. Quantum theory lies in the realm of physics, not mathematics. As a physical theory, it makes predictions about what is plausible in the real world. One of those predictions is that it's possible to build a large-scale fault tolerant quantum computer.
The way to test this theory is to run an experiment and see if it holds. If the experiment fails, we'll have to figure out why the theory predicted it but the experiment didn't deliver.
> One of those predictions is that it's possible to build a large-scale fault tolerant quantum computer.
Quantum theory doesn't predict that it's possible to build a large scale quantum computer. It merely says that a large scale quantum computer is consistent with theory.
Dyson spheres and space elevators are also consistent with quantum theory, but that doesn't mean that it's possible to build one.
Physical theories are subtractive: something that is consistent with the lowest levels of theory can still be ruled out by higher levels.
Good point. I didn't sufficiently delineate what counts as a scientific problem and what counts as an engineering problem in QC.
Quantum theory, like all physical theories, makes predictions. In this case, quantum theory predicts that if the physical error rate of qubits is below a threshold, then error correction can be used to raise the quality of a logical qubit to arbitrarily high levels. This prediction could turn out to be false. We don't yet know all of the potential noise sources that might prevent us from building a quantum logic gate of similar quality to a classical logic gate.
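For concreteness, here is a minimal Python sketch of the scaling heuristic behind that prediction; the threshold value, prefactor, and physical error rate are illustrative assumptions, not numbers from any particular device:

    # Toy sketch of the threshold-theorem scaling for a surface-code-style
    # logical qubit. The heuristic p_L ~ A * (p/p_th)^((d+1)/2) and the
    # numbers below are illustrative assumptions, not measured values.

    def logical_error_rate(p_phys, distance, p_threshold=1e-2, prefactor=0.1):
        return prefactor * (p_phys / p_threshold) ** ((distance + 1) // 2)

    # With a physical error rate ~3x below the assumed threshold, the logical
    # error rate is suppressed exponentially as the code distance grows.
    for d in (3, 5, 7, 9, 11):
        print(f"d={d:2d}  p_L ~ {logical_error_rate(3e-3, d):.1e}")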
Building thousands of these logical qubits is an engineering problem, similar to Dyson spheres and space elevators. You're right that validating the lower levels by building one really good logical qubit doesn't mean that we can build thousands of them.
In our case, even the lower levels haven't been validated. This is what I meant when I implied that the project of building a large-scale QC might teach us something new about physics.
> The way to test out this theory is to try out an experiment to see if this is so. If this experiment fails, we'll have to figure out why theory predicted it but the experiment didn't deliver.
If "this experiment" is trying to build a machine, then failure doesn't give much evidence against the theory. Most machine-building failures are caused by insufficient hardware/engineering.
Quantum theory predicts this: https://en.wikipedia.org/wiki/Threshold_theorem. An experiment can show that this prediction is false. That's a scientific problem, not an engineering one. Physical theories have to be verified with experiments. If the results of the experiment don't match what the theory predicts, then you have to do things like re-examine the data, revise the theory, etc.
But that theorem being true doesn't mean "they will work given enough time". That's my objection. If a setup is physically possible but sufficiently thorny to actually build, there's a good chance it won't be built ever.
In the specific spot I commented, I guess you were just talking about the physics part? But the GP was talking about both the physics and the physical realization, so I thought you were talking about the combination too.
Yes we can probably test the quantum theory. But verifying the physics isn't what this comment chain is really about. It's about working machines. With enough reliable qubits to do useful work.
You're right. I didn't sufficiently separate experimental physics QC from engineering QC.
On the engineering end, the answer to whether a large-scale quantum computer can be built is leaning toward "yes" so far. DARPA QBI https://www.darpa.mil/research/programs/quantum-benchmarking... was created to answer this question, and 11 teams have made it to Stage B. Of course, only people who trust DARPA will accept this evidence, but that's all I have to go on.
On the application front, the jury is still out for applications that are not related to simulation or cryptography: https://arxiv.org/abs/2511.09124
Publishing findings that amount to an admission that you and others spent a fortune studying a dead end is career suicide and guarantees your excommunication from the realm of study and polite society. If a popular theory is wrong, some unlucky martyr must first introduce incontrovertible proof and then humanity must wait for the entire generation of practitioners whose careers are built on it to die.
Quantum theory is so unlikely to be wrong that if large-scale fault tolerant quantum computers could not be built, the effort to try to build them will not be a dead end, but instead a revolution in physics.
Silicon is not one of the leading modalities for quantum computers, but it has progressed a lot in the past ~2-3 years. Here are a few key advancements that have happened as of late:
"early days" means that the 1998 computer didn't have qubits that were below the error correction threshold. Now we have hundreds of qubits below threshold. We'll need millions of qubits like these for quantum computing to be useful. If that take decades, this is the "early days" relatively.
Depends on what we mean by "early days on hardware".
If we mean "we've have been working on this for almost 3 decades. That's a very long time to be working on something!". I agree.
If we mean "We just now only have a few logical qubits that outperform their physical counterparts and we'll need thousands of these logical qubits to run anything useful" then we are still in the early days.
Yes, useful QC is far enough out that it's "anyone's guess", but the field is actively working on moving the answer to this problem from "anyone's guess" to "a bit more certain". It will never be 100% certain until a useful QC appears, but we can decrease the probability that our predictions are pure guesswork. As an example, DARPA is funding a project to find the first high-impact QC applications https://www.darpa.mil/work-with-us/publications-highlighting... along with finding when the first hardware to run those applications can be built https://www.darpa.mil/work-with-us/quantum-benchmarking-init....
QC startups should be funded because industry is a crucial component of QC progress, and the large-scale QC labs (Google, IBM, etc.) can't work on all the ideas. The ideas that come from startups do accelerate QC development.
A quantum internet is absolutely necessary for creating a useful quantum computer, the same way a network (a LAN) is needed to create a supercomputer. A supercomputer is essentially many computers connected together. A quantum computer that solves problems we care about will be similar: https://arxiv.org/abs/2212.10609.
Still, it seems like what is needed here is more a quantum LAN, or possibly even just an on board interconnect between quantum processors. The focus on wide area quantum networks feels a bit odd.
One application we care about is using quantum computers to build high resolution telescopes https://arxiv.org/abs/1107.2939. A wide area network is required because the telescopes need to be far apart.
Although the prospects for using quantum computers to solve classical problems are pretty bleak, the primary motivator for the invention of quantum computers was not to solve classical problems, but to solve quantum ones: https://tinyurl.com/3ndp36y7.
Think of early quantum computers as tools for scientific discovery, not for addressing industrial problems. Their ability to solve commercial problems comes later, that is, decades from now.
Every communication or marketing I have ever seen done by quantum computing actors has been about very "classical" problems: finance, clean energy, AI...
It's all snake oil, obviously. Those are keywords thrown out for VC money. IMO, there would be no way for this many companies to raise this much money if the investors knew what kind of problems quantum computing is really addressing.
The people who claim that current quantum computers are useful for classical problems contribute to "Quantum hype" which is frowned upon by most members of the community.
I mean, I wish. I met these so-called giants of the field when I was at a conference (they had a yearly big-brains meeting at the same time, same place). I bet they talk about classical stuff because otherwise they wouldn't get funding. Quantum mechanics and using quantum computers for quantum problems actually would have convinced me. But who cares about me. What they need is this thing called MONEY, and that doesn't come with intellectually interesting problems -- it comes with overinflated claims about things that the committee understands. So classical problems it is. Can't blame them for playing the game, but at the same time, I wonder how they look in the mirror at night.
To be fair, this is what academics in basically every field do.
If you prove a useless result about an exotic construction in your niche of topology, you mention that topology has recently been successfully applied, e.g., in data science. If you study some pathological convergence properties of unheard-of stochastic processes, you cite the Black-Scholes equation and remind the reader of its importance in finance.
Computational physicists have been thinking about algorithms for simulating quantum systems essentially since computers were invented. We have decent algorithms for approximating ground states, or for systems in equilibrium (contingent on it being spin-balanced, or at half-filling, or at 0 density, ... depending on the model), or in other limited circumstances.
But lift any of those special restrictions, and simulation methods hit a sign problem [sign]. In particular, real-time evolution of quantum systems, which is what a quantum computer does by its very nature, poses in some sense the most difficult sign problem for approaches leveraging classical computing.
That's not a proof that classical algorithms can't become more capable, but it's almost certainly a question that must be answered system-by-system. The generic sign problem is NP-hard, so special-case reasoning is required.
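As a toy illustration of what the sign problem does to a Monte Carlo estimate (the exponential decay rate of the average sign below is an assumption for illustration, not a real model):

    # Toy arithmetic behind the sign problem: the statistical error of a
    # signed Monte Carlo average stays ~1/sqrt(N), while the signal (the
    # average sign) typically decays exponentially with system size, so the
    # samples needed for a fixed relative error grow exponentially. The decay
    # rate used here is an illustrative assumption, not a real model.

    import math

    def samples_needed(avg_sign, target_relative_error=0.01):
        # relative error ~ sqrt((1 - s**2) / N) / s, solved for N
        return (1 - avg_sign ** 2) / (target_relative_error * avg_sign) ** 2

    for system_size in (10, 20, 40, 80):
        s = math.exp(-0.1 * system_size)  # assumed exponential decay of <sign>
        print(f"size {system_size:3d}: <sign> ~ {s:.1e}, "
              f"samples for 1% error ~ {samples_needed(s):.1e}")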
The reason those lattice field theory computations are done that way is that they provide stochastic but polynomial-time algorithms for exactly the same kind of exponentially-large state space that appears in quantum chemistry.
> Is there a place for quantum computers if classical algorithms become more capable at simulating quantum mechanics in ways we find useful?
There is not. Our existence as a field pretty much hinges on classical computers not being able to simulate all quantum mechanical problems efficiently. We imagine that designing quantum matter (https://cognitivemedium.com/qc-a-science, https://arxiv.org/abs/1508.02595) will be very useful in the scientific and technological sense, and we don't think classical computers will ever fully stand up to that task.
> Breaking crypto, unless that falls too
If classical computers can simulate quantum efficiently then using quantum computers to break crypto also falls. Simulating quantum physics and factoring are in the same complexity class: https://en.wikipedia.org/wiki/BQP
> Our existence as a field pretty much hinges on classical computers not being able to simulate all quantum mechanical problems efficiently.
I don't think this is quite accurate. It could be that many of the kinds of quantum simulations we care about can be done efficiently classically, even if the worst-case quantum simulations are classically intractable. Certainly, classical simulation algorithms are steadily improving.
Right. We are now arguing over the nuances of what would make quantum computers useful, which I address in a comment where I say "Everything matters" later in this thread.
Most people who work in this field doubt that every quantum simulation problem we care about will be classically tractable in practice, that is, beyond the worst case. If we believed that, we might as well give up and continue to use the robust, mature classical computers we already have, which will keep improving for the foreseeable future.
I agree. "Everything" matters when it comes to these applications: complexity theory, heuristics, constant factors, quantum error correction overhead, qubit quality, improvements in classical algorithms, CPU and GPU improvements e.t.c. Doesn't make sense to put too much stock in just one of these components at the cost of others.
> Think of early quantum computers as tools for scientific discovery, not for addressing industrial problems. Their ability to solve commercial problems comes later, that is, decades from now.
Well, they might become very useful for simulations in materials science, even if the 'only' thing they can do better than normal computers is simulate quantum physics.
Yes. It's a spectrum. In the worst case, quantum computers only help us gain a deep understanding of quantum physics. In the best case, they beat classical computers on optimization problems as well. Materials science falls somewhere along this spectrum.
Yes. Though it's more than a one dimensional spectrum:
There's also the orthogonal possibility that quantum computers don't work, or don't work well, and that we'll eventually learn some new physics that tells us why. (Orthodox quantum mechanics says that quantum computers should work, but so far they've been hard to build. It's most likely 'just' engineering issues, but there's still the possibility of something deeper.)