Updated May 2026
9 min read

Computability of Physics

When Equations Refuse to Be Solved

A Strange Hidden Assumption

Most physics rests on a quiet assumption that almost no one writes down. The assumption is that, in principle, you could compute the answer to any physical question. Given the equations of motion and the initial conditions, you can crank a calculation forward and predict what happens. Maybe the calculation takes a long time. Maybe it requires absurd amounts of memory. But it can be done, eventually, by some sufficiently patient computer. Physics is, in this view, a giant arithmetic exercise that nature has solved in real time and that we are slowly catching up to.

This assumption turns out to be wrong. Some of the most natural questions you can pose about physical systems are provably uncomputable – not in the sense that they are hard, but in the strict mathematical sense that no algorithm can answer them, ever, regardless of computer speed or available memory. Other questions are computable in principle but require astronomical amounts of computation to answer in practice. The interplay between physics and computability is now a serious research topic, with implications for what we can predict, what we can simulate, and what we can ever know.

The Halting Problem

The first cracks appeared in the 1930s in pure mathematics. Alan Turing was thinking about what an "algorithm" really meant, formalizing the intuitive notion as a simple machine that reads and writes symbols on a tape. Then he asked: is there an algorithm that takes any other algorithm as input, plus an input for that algorithm, and decides whether the algorithm will eventually halt or run forever? In 1936 Turing proved there is not. The halting problem is undecidable. No matter how clever you are, no algorithm can correctly answer "will this program ever stop?" for every possible program.

A simple machine reads its tape - but no one can always predict whether it stops

The proof is short and beautiful. Imagine someone claims to have built a perfect oracle – a program that can look at any other program and always correctly say whether it will eventually stop or run forever. Now we play a trick: we write a new program that asks the oracle about itself, and then deliberately does the opposite of whatever the oracle predicts. If the oracle says “it will stop”, our new program loops forever; if the oracle says “it will loop”, our program immediately stops. Either way the oracle is wrong, which means no such perfect oracle can exist. This kind of self-referential paradox echoes Cantor’s diagonal proof and Gödel’s incompleteness theorems – the same structural trick revealing fundamental limits in three different corners of mathematics: Cantor in the 1890s, then Gödel and Turing within five years of each other.
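The contradiction can be sketched as a toy Python model. The names `oracle` and `contrarian` are illustrative, and the real impossibility is of course that no correct `oracle` can be written at all; this version just guesses "halts" so the script can run, with an actual infinite loop replaced by a returned label:

```python
def oracle(program, inp):
    """A claimed perfect halting oracle. No total, always-correct
    version can exist; this toy stand-in simply guesses 'halts'."""
    return True

def contrarian(inp):
    # Ask the oracle about ourselves, then do the exact opposite.
    if oracle(contrarian, inp):
        return "LOOPS"   # stand-in for: loop forever
    else:
        return "HALTS"   # stand-in for: halt immediately

# Whatever the oracle answers, contrarian's actual behaviour contradicts it.
prediction = oracle(contrarian, 0)            # oracle says: "it halts"
actually_halted = contrarian(0) == "HALTS"    # but it "loops"
assert prediction != actually_halted          # so the oracle is wrong
```

Swapping in any other total function for `oracle` produces the same contradiction, because `contrarian` is built to defeat whatever predictor it is handed.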

A vintage punch-card stack and a polished slide rule arranged together on a dark wooden desk, lit from one side with soft warm light, evoking the early-twentieth-century mathematical and engineering culture from which Turing's machines emerged
The era when computation became a precise mathematical idea, before any working computer existed

When Physics Inherits the Limit

For a long time, undecidability looked like a curiosity confined to pure logic. Then physicists started showing that ordinary questions about ordinary materials are secretly halting problems in disguise. The most striking example came in 2015, when Toby Cubitt, David Pérez-García, and Michael Wolf proved that the spectral gap problem is uncomputable in general. The spectral gap is just the energy difference between a material’s lowest-energy state and its next-lowest. It controls whether the material conducts electricity, whether it absorbs certain photons, whether it can be a useful insulator. The 2015 proof showed that for a perfectly natural family of two-dimensional materials, with a small handful of input numbers describing the chemistry, there is simply no algorithm that can decide whether a gap exists.

The proof works by encoding an entire general-purpose computer inside the lattice’s quantum mechanics. Whether the lattice ends up with a gap or without one depends on whether the embedded computer eventually halts. Since the halting question itself is undecidable, so is the gap question. This is not a statement about how hard the calculation is – the calculation is provably impossible to do correctly for every input, no matter how powerful the future computer. Similar uncomputability results have since been proven for the long-time behavior of certain dynamical systems, for the rules governing certain quantum theories, and for whether certain particle-physics diagrams represent the same process. Uncomputability has bled out of pure logic and into the working life of physics.

Hard, Easy, and the Space Between

Even when a problem can be solved in principle, it may not be solvable in practice. The branch of computer science that studies this is called complexity theory, and it sorts problems into classes by how their difficulty grows as the input gets bigger. The easy class is everything where the work needed grows gently with input size – multiplication, sorting a list, finding the shortest path in a road network. The hard class includes problems where checking a candidate answer is fast, but actually finding the answer seems to require trying so many possibilities that no realistic computer could finish. The traveling salesman problem, scheduling around constraints, and breaking common encryption all fall here. Whether the hard class is secretly equivalent to the easy class – whether some clever shortcut might exist that no one has found yet – is the most famous open question in computer science, one of the seven Millennium Prize problems.
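The asymmetry between checking and finding is easy to see concretely. In this sketch (the five-city distance matrix is made up for illustration), scoring a proposed tour takes one linear pass, while finding the best tour by brute force means trying every ordering – a count that grows factorially and is infeasible long before thirty cities:

```python
import itertools

# Symmetric distances between 5 hypothetical cities (illustrative numbers).
DIST = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    # Fast to CHECK: one pass over the tour, linear in the number of cities.
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def brute_force_best(n):
    # Slow to FIND: tries (n-1)! orderings starting from city 0.
    best = min(itertools.permutations(range(1, n)),
               key=lambda p: tour_length((0,) + p))
    return (0,) + best, tour_length((0,) + best)
```

For n = 5 there are only 24 orderings; for n = 30 there are about 10^30, which is the whole point.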

Easy problems nest inside harder ones - and the boundaries are still unknown

Physics generates problems in every class. Simulating a generic quantum system on an ordinary computer requires resources that double for every additional particle you include – a hundred-particle quantum system is already beyond any current computer on Earth, and not by a small margin. Finding the lowest-energy configuration of a general quantum material is among the hardest problems in its class even for a quantum computer. Even fully classical problems can be brutally hard: predicting whether a glassy alloy will settle into a particular configuration in a given time is in the same difficulty class as code-breaking. Real physics teems with this kind of computational difficulty, and that difficulty is often the reason theory takes years to catch up to experiment.
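The exponential wall for simulating quantum systems can be made concrete. A system of n two-level particles needs 2^n complex amplitudes to describe its state; at 16 bytes per amplitude (a minimal estimate that ignores all overhead):

```python
def statevector_bytes(n_particles, bytes_per_amplitude=16):
    # Each additional particle DOUBLES the number of amplitudes needed.
    return (2 ** n_particles) * bytes_per_amplitude

# 30 particles:  ~17 GB      -- a large workstation can just manage it
# 50 particles:  ~18 PB      -- beyond the biggest supercomputers
# 100 particles: ~2e31 bytes -- more memory than could plausibly be built
```

The jump from "workstation" to "physically impossible" takes only seventy extra particles, which is why a hundred-particle system is out of reach by such an enormous margin.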

Where Quantum Computers Help

Quantum computers do not solve everything faster. They occupy their own region in the difficulty map, partly overlapping with what ordinary computers can do efficiently and partly extending into harder problems. Peter Shor showed in 1994 that one famously hard task – finding the prime factors of a very large number, the problem behind most modern internet encryption – collapses to easy on a quantum computer, dramatically faster than any known classical method. Lov Grover showed in 1996 that searching an unsorted list could be sped up too, though only by a moderate amount. Beyond these, quantum computers are believed to give enormous speedups for simulating quantum systems themselves – which makes physics one of the most natural targets for quantum computing applications.
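Shor's speedup targets one specific subroutine: finding the period (the "order") of modular exponentiation. Everything around that subroutine is easy classical number theory, as this sketch shows – the order-finding loop below is the exponentially slow classical stand-in for the one step the quantum algorithm makes fast:

```python
from math import gcd

def find_order(a, n):
    # Smallest r > 0 with a**r = 1 (mod n). Classically this loop can take
    # exponentially many steps; it is the ONE step Shor's algorithm speeds up.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n, a):
    # Classical wrapper around order finding, as in Shor's algorithm.
    r = find_order(a, n)
    if r % 2 == 0:
        y = pow(a, r // 2, n)
        f = gcd(y - 1, n)
        if 1 < f < n:
            return f, n // f
    return None  # unlucky choice of a; pick another and retry
```

For the textbook example n = 15 with a = 7, the order of 7 mod 15 is 4, and the wrapper recovers the factors 3 and 5.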

Quantum waves turn randomness into a pattern that reveals the answer

What quantum computers do not do is sweep away every hard problem at once. Most of the famously hard search and scheduling problems remain hard even with a quantum machine. Grover’s speedup is only quadratic, so cracking standard symmetric encryption goes from impossible to merely very hard, and doubling the key length restores the lost margin. The dramatic quantum speedups exploit very specific mathematical structure – Shor exploits the hidden periodicity of certain arithmetic, quantum simulation exploits the local nature of physical interactions. Neither is a general-purpose magic wand. The boundary between what quantum machines can and cannot accelerate is exactly where serious researchers are now searching for the next breakthrough.
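The "modest" in Grover's case is precise: a quadratic speedup halves the exponent, which is why doubling a symmetric key's length recovers the security a quantum attacker would take away:

```python
def grover_queries(key_bits):
    # Classical brute force on an n-bit key costs about 2**n tries;
    # Grover's algorithm needs roughly sqrt(2**n) = 2**(n/2) queries.
    return 2 ** (key_bits // 2)

assert grover_queries(128) == 2 ** 64    # 128-bit key: reduced to 2^64 work
assert grover_queries(256) == 2 ** 128   # 256-bit key: still utterly infeasible
```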

Could Nature Compute More Than We Can?

Turing’s machine is digital and discrete: tape squares hold one symbol at a time, the head moves one step at a time. Real physics involves continuous quantities – positions that can be any value, fields that vary smoothly across space, quantum states that are richer than any list of symbols. Could some continuous physical system do more than any digital computer can ever simulate? This is the question of hypercomputation, and it has a long and contentious history. Certain idealized analog devices, if they could be built, could in principle solve even the halting problem – but such devices require noise-free measurements at infinite precision, which the real world does not seem to allow. Quantum noise, finite measurement precision, and thermal fluctuations all put hard limits on what an actual analog computer can do.

The most provocative argument comes from Roger Penrose. He has argued that human mathematical intuition, particularly the ability to recognize certain self-referential statements as true even though no formal proof of them exists, requires uncomputable processes. He locates those processes in some not-yet-understood quantum-gravitational effect inside microtubules, structural filaments in the cytoskeleton of neurons. The proposal has been heavily criticized on both physical and philosophical grounds, but the question it raises has not gone away: is the brain – or any natural system – doing something that no algorithm can ever match? The mainstream answer is "almost certainly no", but the question is harder to settle definitively than its answer is to recite.

A vintage brass precision instrument beside a modern circuit board, contrasting analog and digital computation
Infinite precision would let analog machines outrun digital ones – but nature adds noise

Could the Universe Itself Be a Computation?

Run the question the other way. Is the universe itself running some kind of program? Konrad Zuse and Edward Fredkin independently proposed, beginning in the 1960s, that physics at very small scales is a digital cellular automaton – a vast grid where each cell’s next state depends only on the states of its immediate neighbors. Stephen Wolfram has argued for decades that simple rules of this kind produce the kind of complexity we observe. Modern digital-physics proposals look at the holographic principle and ask whether spacetime itself emerges from a quantum error-correcting code. None of these is mainstream physics, but they are not crackpot either – serious researchers in quantum gravity now use quantum information ideas as central tools, and the line between "physics" and "computation" is blurring at the foundational level.

A cellular automaton evolution: a black background with a triangular fractal-like pattern of glowing cyan cells, formed by repeatedly applying a single simple rule to each row to generate the next, showing how complexity emerges from minimal local interactions
A simple local rule, applied row by row, builds something that looks like nature
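The triangular pattern in the figure can be generated by Wolfram's rule 90, in which each cell's next state is simply the XOR of its two neighbors. A minimal sketch, using fixed zero boundaries:

```python
def rule90_step(row):
    # New cell = left neighbour XOR right neighbour (cells off-grid count as 0).
    padded = [0] + row + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

def run(width=33, steps=16):
    row = [0] * width
    row[width // 2] = 1          # start: a single live cell in the middle
    for _ in range(steps):
        print("".join("#" if c else " " for c in row))
        row = rule90_step(row)
```

Calling `run()` prints the Sierpinski-like triangle from the figure: one local rule, applied row by row, and the global fractal structure simply falls out.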

The simulation hypothesis – the idea that the universe we experience is itself a simulation running on some larger machine – takes this all the way. Most physicists treat it as a metaphysical conjecture rather than science, because there is no clear way to test it. But pieces of the underlying intuition do show up in real research. The information content of a region of space appears to be bounded by the area of its boundary, not its volume – an unexpected result that comes out of black-hole thermodynamics. Certain mathematical equivalences (the so-called holographic dualities) translate gravitational physics in three dimensions into pure quantum information processing on a two-dimensional surface. If physics really is information-bounded, and really is mathematically equivalent to information processing in deep ways, then "the universe is a computer" stops being a metaphor and starts looking more like a working hypothesis.
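The area-scaling of information can even be put into numbers with the Bekenstein-Hawking formula from black-hole thermodynamics: the maximum entropy of a region is one quarter nat per Planck area of its boundary. A sketch, using the CODATA value of the Planck length:

```python
import math

PLANCK_LENGTH = 1.616255e-35   # metres (CODATA)

def holographic_bit_bound(radius_m):
    # Maximum information in a sphere scales with its surface AREA, not its
    # volume: S = A / (4 * l_p**2) nats, converted here to bits.
    area = 4.0 * math.pi * radius_m ** 2
    return area / (4.0 * PLANCK_LENGTH ** 2) / math.log(2)

# A one-metre sphere: roughly 1.7e70 bits, fixed entirely by its boundary.
```

Doubling the radius quadruples the bound (area), while the naive volume count would multiply it by eight – that mismatch is the surprise at the heart of the holographic principle.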

Why It Changes What Physics Can Promise

The classical scientific bargain was: discover the laws, plug in the initial conditions, predict the future. Computability results force a more careful version of that bargain. Some questions are simply outside the reach of any algorithm, and no advance in computers will fix them. Other questions are answerable in principle but require resources that scale exponentially with system size, putting them out of reach in practice. Physics will keep predicting things, but the catalog of "in principle but not in practice" and "not even in principle" questions is now substantial.

For technology the implications cut both ways. Hard problems are bad news for prediction but good news for cryptography – the entire modern internet is built on problems we strongly suspect have no efficient solution. Quantum speedups threaten some of those constructions and have motivated a serious effort to design new encryption that should resist quantum attack. For artificial intelligence, the question of what kinds of problems machine learning can and cannot eventually solve is closely tied to the same complexity questions. Computability is no longer an obscure footnote in foundations; it is part of the working environment of modern science and engineering.

The Bigger Picture

For four hundred years physics has assumed that nature follows mathematical laws, and that the right job for a physicist is to find them. Computability adds a third question: are the laws solvable? Some are solvable analytically, some are solvable only numerically with bounded effort, some require exponential resources, and some are literally unsolvable by any algorithm. The map of physics no longer has only "known" and "unknown" regions. It has a third territory: known to be unknowable.

That third territory is not failure. It is information about the structure of nature, just as Gödel’s incompleteness theorems were information about the structure of mathematics. We learn something about a system when we prove that no shortcut exists for understanding it – we learn that the system itself is the shortest description of itself, that running it forward is the only way to know what it does. Some physicists now argue that this is the right way to think about consciousness, about evolution, about the long-term behavior of climate systems. Compute it forward, watch what happens, and accept that no shorter route will ever be found. Computability is, in that view, one of the deepest things physics has discovered about itself.

Everything connects to almost everything else
