Simulation Inferences
How small must the computer simulating the universe be?
Nick Bostrom’s Simulation Argument (SA) argues that either:

1. civilizations/entities with tremendous computing power do not exist;
2. they exist, but choose not to simulate primitive civilizations like us (whatever else they might do);
3. or we are likely in such a simulation.

That is: they don’t exist, they exist but don’t use it, or they exist and use it.1 The SA provides a framework for revisiting the old SF skeptical chestnut “what if, like, we’re just in a computer simulation, man?” with potentially better-grounded reasons for considering it reasonably possible—humanity certainly does run as many small crude simulations as it can about every subject under the sun and is particularly interested in modeling the past, and there are no good long-term reasons for thinking that (should humanity not go extinct or civilization collapse) total computing power will for all time be sharply bounded below the amount necessary for realistic simulations of whole worlds or people.
The Problem
What’s nice about the SA is that it presents us with a trilemma whose branches are all particularly noxious.
Living in a Simulation
We don’t want to accept #3—believing we are in a simulation is repugnant and means we must revise almost every aspect of our worldview2. It also initially seems flabbergasting: when one considers the computing power and intellectual resources necessary to create such a detailed simulation, one boggles.
Disinterested Gods
But if we accept #2, aren’t we saying that despite living in a civilization that devotes a large fraction of its efforts to entertainment, and a large fraction of its entertainment to video games—and what are complex video games but simulations?—we expect most future civilizations descended from ours, and most future civilizations in general, to simply not bother to simulate the past even once or twice? This seems a mite implausible. (And even if entertainment ceases to focus on simulations of various kinds3, are we to think that even historians wouldn’t dearly love to re-run the past?)
Infeasible Simulations
And if we accept #1, we’re saying that no one will ever attain the heights of power necessary to simulate worlds—that the necessary computing power will not come into existence.
Physical Limits to Simulation
What does this presuppose? Well, maybe there are physical limits that bar such simulations. Simulation may be possible in principle, but just not feasible4. Unfortunately, the physical limits permit the simulation of worlds. Seth Lloyd, in “Ultimate physical limits to computation”, concludes that:
The “ultimate laptop” is a computer with a mass of one kilogram and a volume of one liter, operating at the fundamental limits of speed and memory capacity fixed by physics. The ultimate laptop performs ≈5.4258×10^50 logical operations per second on ≈10^31 bits.
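Lloyd’s speed figure follows from the Margolus–Levitin bound of 2E/πħ operations per second for a system of average energy E; here is a minimal sketch of that arithmetic, assuming the laptop’s entire 1 kg rest mass counts as available energy:

```python
import math

# Margolus-Levitin bound: a system with average energy E can perform at most
# 2E/(pi*hbar) elementary operations per second. Lloyd applies it to 1 kg of
# mass-energy confined to one liter.
c    = 2.998e8       # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J*s
mass = 1.0           # kg

energy  = mass * c**2                    # rest-mass energy, E = mc^2
ops_sec = 2 * energy / (math.pi * hbar)  # maximum logical operations per second
print(f"{ops_sec:.2e} ops/sec")          # ~5.4e50, matching Lloyd's figure
```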
The Cost of Simulation
Nick Bostrom calculates as a rough approximation that simulating a world could be done for ‘≈10^33–10^36 operations’5. Even if we assume this is off6 by, say, 5 orders of magnitude (10^38–10^41), the ultimate laptop could run millions or billions of civilizations. A second.
It seems unrealistic to expect humanity, in any incarnation, to reach the exact limits of computation. So suppose instead that humanity had spread throughout the Solar System processors amounting, in total, to the volume of the Earth. How fast would each liter need to be for the whole to equal our ultimate laptop simulating millions or billions of civilizations? Well, the volume of the Earth is 1.08321×10^24 liters; so ≈5×10^26 operations per second per liter.
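To make the arithmetic explicit, a quick sketch using Lloyd’s ≈5.4×10^50 ops/sec figure and the padded 10^41-operation cost from above (the constant names are just for illustration):

```python
ULTIMATE_OPS = 5.4e50       # ops/sec of Lloyd's 1-liter ultimate laptop
SIM_COST     = 1e41         # ops per world-simulation, padded pessimistic estimate
EARTH_LITERS = 1.08321e24   # volume of the Earth in liters

# Simulations per second on a single ultimate laptop:
print(f"simulations/sec: {ULTIMATE_OPS / SIM_COST:.0e}")        # ~5e9, i.e. billions

# Speed each liter must reach, if an Earth-volume of processors is to
# collectively match that one ultimate laptop:
print(f"ops/sec per liter: {ULTIMATE_OPS / EARTH_LITERS:.0e}")  # ~5e26
```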
Prospects for Development
10^26 ops per second isn’t too bad, actually. That’s 100 yottaflops (a yotta- being 10^24). IBM Roadrunner, in 2009, clocked in at 1.4 petaflops (1.4×10^15 FLOPS). So our hypothetical low-powered laptop is equal to roughly 350 billion Roadrunners (5×10^26 / 1.4×10^15 ≈ 3.6×10^11).
OK, but how many turns of Moore’s law would that represent? Quite a few7: 38 doublings are necessary. Or, at the canonical 18 months, exactly 57 years. Remarkable! If Moore’s Law keeps holding (a dubious assumption, see Moore’s Second Law), such simulations could begin within my lifetime.
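The doubling count can be checked directly; this is a sketch assuming the Roadrunner baseline above and the ≈5×10^26 ops/sec-per-liter target:

```python
import math

baseline = 1.4e15   # IBM Roadrunner, ops/sec (1.4 petaflops)
target   = 5e26     # required ops/sec per liter, from above

doublings = round(math.log2(target / baseline))     # ~38 doublings of Moore's Law
years     = doublings * 1.5                         # 18 months per doubling
print(f"{doublings} doublings, ~{years:.0f} years") # 38 doublings, ~57 years
```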
We can probably expect Moore’s Law to hang on for a while. There are powerful economic inducements to keep developing computing technology—processor cycles are never cheap enough. And there are several plausible paths forward.
But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument. We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal.8
Even if we imagine Moore’s Law ending for good within a few more turns, that would not guarantee that humanity remains below the computing-power ceiling forever. Perhaps we will specialize in extremely low-power & durable processors, once we can no longer create faster ones, and accumulate enough computing power the slow way, over millennia. If we want to honestly affirm #1, we must find some way to exclude humanity and other civilizations from ever becoming powerful enough.
Destruction
The easiest way to ensure humanity will never have enough computing power is for it to die. (A cheerful thought! Wouldn’t one rather exist in a simulation than not exist at all?)
Perhaps advanced civilizations reliably destroy themselves. (This is convenient as it also explains the Fermi paradox.) It could be rogue AIs, nuclear war, nanoplagues, or your favorite existential risk.
A failure to develop could well be as fatal as any direct attack. A failure to develop interstellar travel leaves humanity vulnerable to a solar system-wide catastrophe. One cannot assume humanity will survive indefinitely at its present level of development, nor even at higher levels.
But undesirability doesn’t mean this is false. After all, we can appeal to various empirical arguments for #2 and #3, and so the burden of proof is on those who believe humanity will forever be inadequate to the task of simulating worlds, or will abandon its eternal love of games/simulated-worlds.
SA Is Invalid
One might object to the SA that the triple disjunction is valid, but of no concern: it is unwarranted to suggest that we may be simulated. An emulation or simulation would presumably be of such great accuracy that it’d be meaningless for inhabitants to think about it: there are no observations they could make one way or the other. It is meaningless to them—more theology than philosophy.
We may not go to the extreme of discarding all non-falsifiable theories, but we should at least be chary of theories that disclaim falsifiability in most circumstances9.
Investigating Implications of SA
The simulation hypothesis is, however, susceptible to some form of investigation. We can investigate the nature of our own universe and make deductions about any enclosing/simulating universe10.
More concretely, we can put lower bounds on the computing power available in the lower11 universe, and, incidentally, on its size.
If a simulated universe requires n units of space-time, then the simulating universe must contain at least n + 1 units; it would be paradoxical for a simulated universe to be more information-rich than its simulator, inasmuch as the simulator includes the simulated (how could something be larger than itself?). So if we observe our universe to require n units, then by the Anthropic principle, the simulator must contain at least n + 1 units.
Limits of Investigation
This is a weak method of investigation, but how weak?
Very.
Suppose that the entire universe is being simulated, particle by particle. This is surely the worst-case scenario, from the simulator’s point of view.
There are a number of estimates, but we’ll say that there are 10^86 particles in the observable universe. It’s not known how much information it takes to describe a particle in a reasonably accurate simulation—a byte? A kilobyte? But let’s say the average particle can be described by a megabyte—then the simulating universe needs ≈10^92 spare bytes. (About 10^62 ultimate laptops’ worth of data storage.)
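A sanity check of the storage arithmetic, under the (generous) megabyte-per-particle assumption and Lloyd’s ≈10^31-bit memory figure:

```python
PARTICLES      = 1e86   # particles in the observable universe (rough estimate)
BYTES_PER_PART = 1e6    # assume ~1 megabyte to describe each particle
LAPTOP_BITS    = 1e31   # memory of one ultimate laptop, in bits (Lloyd)

total_bytes = PARTICLES * BYTES_PER_PART       # ~1e92 bytes of state
laptops     = total_bytes * 8 / LAPTOP_BITS    # ~1e62 ultimate laptops of storage
print(f"{total_bytes:.0e} bytes ~= {laptops:.0e} ultimate laptops")
```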
But we run into problems. All that really needs to be simulated is what falls within humanity’s light cone. And the simulators could probably ‘cheat’ even more with techniques like lazy evaluation and memoization.
It is not necessary for the simulating thing to be larger, particle-wise, than the simulated. The ‘greater size’ principle is information-theoretic.
If one wanted to simulate an Earth, the brute-force approach would be to take an Earth’s worth of atoms and simulate the particles on a one-to-one basis. But programmers use ‘brute-force’ as a pejorative term connoting an algorithm that is dumb, slow, and far from optimal.
Better algorithms are almost certain to exist. For example, Conway’s Game of Life might initially seem to require n^2 space-time as the plane fills up. But if one caches the many repeated patterns, as in Bill Gosper’s Hashlife algorithm, logarithmic and greater speedups are possible.12 It need not be as inefficient as a real universe, which mindlessly recalculates again and again. It is not clear that a simulation need be isomorphic in every step to a ‘real’ universe, if it gets the right results. And if one does not demand a true isomorphism between simulation and simulated, but allows corners to be cut, large constant-factor optimizations are available.13
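As a toy illustration of the caching idea (this is plain memoization of whole patterns, not Hashlife’s recursive quadtree hashing, and it only reuses exact repeats at the same coordinates):

```python
from functools import lru_cache
from itertools import product

# The eight neighbor offsets of a cell.
OFFSETS = [(dx, dy) for dx, dy in product((-1, 0, 1), repeat=2) if (dx, dy) != (0, 0)]

@lru_cache(maxsize=None)          # identical patterns are only ever computed once
def life_step(cells: frozenset) -> frozenset:
    """One Game of Life step on a set of live (x, y) cells."""
    counts = {}
    for x, y in cells:
        for dx, dy in OFFSETS:
            counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    return frozenset(c for c, n in counts.items()
                     if n == 3 or (n == 2 and c in cells))

blinker = frozenset({(0, 0), (1, 0), (2, 0)})
assert life_step(life_step(blinker)) == blinker   # period-2 oscillator
life_step(blinker)                                # this call is served from the cache
```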
And the situation gets worse depending on how many liberties we take. What if, instead of calculating everything in the universe, the simulation is cut down by orders of magnitude to just everything in our light cone? Or in our solar system? Or just the Earth itself? Or the area immediately around oneself? Or just one’s brain? Or just one’s mind as a suitable abstraction? (The brain is not very big, information-wise.14) Remember the estimate for a single mind: something like 10^17 operations a second. Reusing the ops/second figure from the ultimate laptop, we see that our mind could be handled in perhaps 10^-34 of a liter.
This is a very poor lower bound! All this effort, and the most we can conclude about a simulating universe is that it must be at least moderately bigger than the Planck length (≈1.6×10^-35 m) cubed.
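Comparing the two volumes, under the 10^17 ops/sec mind estimate and the ultimate laptop’s ≈5.4×10^50 ops/sec per liter:

```python
MIND_OPS     = 1e17      # estimated ops/sec for one human mind
ULTIMATE_OPS = 5.4e50    # ops/sec of the 1-liter ultimate laptop
PLANCK_M     = 1.6e-35   # Planck length, meters

mind_liters   = MIND_OPS / ULTIMATE_OPS   # ~2e-34 of a liter suffices for a mind
planck_liters = (PLANCK_M ** 3) * 1000    # Planck volume: ~4e-102 liters
print(f"mind: {mind_liters:.0e} L   Planck volume: {planck_liters:.0e} L")
```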
Investigating
We could imagine further techniques: perhaps we could send off Von Neumann probes to the far corners of the universe, in a bid to deliberately increase resource consumption. (This is only useful if the simulators are ‘cheating’ in some of the ways listed above. If they are simulating every single particle, making some particles move around isn’t going to do very much.)
Or we could run simulations of our own. It would be difficult for simulators to program their systems to see through all the layers of abstraction and optimize the simulation; to do so in general would seem to require violating Rice’s Theorem (a generalization of the halting problem). It is well known that while any Turing machine can be run on a Universal Turing machine, the performance penalty can range from the minor to the horrific.
The more virtual machines and interpreters there are between a program and its fundamental substrate, the more difficult it is to understand the running code—it becomes ever more opaque, indirect, and bulky.
And there could be dozens of layers. A simulated processor is being run; this receives machine code which it must translate into microcode; the machine code is being sent via the offices of an operating system, which happens to be hosting another (virtualized) operating system (perhaps the user runs Linux but needs to test a program on Mac OS X). This hosted operating system is largely idle save for an interpreter for, say, Haskell. The program loaded into the interpreter, Pugs, happens to be itself an interpreter… Even excluding any simulation our own universe might be running in, we have arguably reached at least 5 or 6 levels of indirection already (viewing the OSes as just single layers), and without resorting to any obtuse uses of indirection15.
Even without resort to layers, it is possible for us to waste indefinite amounts of computing power, power that must be supplied by any simulator. We could brute-force open questions such as the Goldbach conjecture, or we could simply execute every possible program. It would be difficult for the simulator to ‘cheat’ on that—how would they know what every possible program does? (Or if they can know something like that, they are so different from us as to render speculation quisquillian.) It may sound impossible to run every program, because we know many programs are infinite loops; but it is, in fact, easy to implement the dovetail technique.
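A minimal sketch of the dovetail technique (the ‘programs’ here are stand-in generators rather than an enumeration of real Turing machines; all names are illustrative):

```python
import itertools

def nth_program(i):
    """Stand-in for the i-th program in some enumeration; this one never
    halts, yielding once per simulated execution step."""
    for step in itertools.count():
        yield (i, step)

def dovetail(rounds):
    """In round n, start program n and advance every started program by one
    step, so each program gets unbounded time even though many never halt."""
    running, trace = [], []
    for n in range(rounds):
        running.append(nth_program(n))
        for prog in running:
            try:
                trace.append(next(prog))
            except StopIteration:   # halted programs are simply skipped
                pass
    return trace

print(dovetail(4))   # [(0,0), (0,1),(1,0), (0,2),(1,1),(2,0), (0,3),(1,2),(2,1),(3,0)]
```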
Risks
But one wonders: aren’t we running a risk here by sending off Von Neumann probes and using vast computing resources? We risk angering the simulators as we callously use up their resources. Or might there not be a grand OOM Killer? With every allocation, we come a little closer to the unknown limits!
First, there’s no guarantee that we’re in a simulation at all: we never ruled out the possibility that most civilizations are simply destroyed. It would be as terrible to step softly for fear of the Divine Programmer as for fear of the Divine. Secondly, the higher we push the limits without disaster (and we will push the limits for the sake of economic growth alone), the more confident we should be that we aren’t in a simulation. (The higher the limit, the larger the simulating universe must be; and small universes are more parsimonious than large ones.) And who knows? Perhaps we live in a poor simulation, one which will let us probe it more directly.
External Links
“The Kernel Hacker’s Bookshelf: Ultimate Physical Limits of Computation”
“My plan to destroy the universe won’t work” (argues that observers have a fixed amount of information or entropy about the universe, and any observation merely shuffles it around, so the simulation cost is fixed as well)