The theory, which I probably misunderstand because I have a similar level of education to a macaque, states that because a simulated world would eventually develop to the point where it creates its own simulations, it’s then just a matter of probability that we are in a simulation. That is, if there’s one real world, and a zillion simulated ones, it’s more likely that we’re in a simulated world. That’s probably an oversimplification, but it’s the gist I got from listening to people talk about the theory.
But if the real world sets up a simulated world which more or less perfectly simulates itself, then running a mirror sim-within-a-sim would need at least twice that processing power/resources, no? How could the infinitely recursive simulations even begin to be set up unless more and more hardware is constantly being added by the real meat people to the initial simulation? It would be like that cartoon (or was it a silent movie?) of a guy laying down train track struts while sitting on the cowcatcher of a moving train. Except in this case the train would be moving at close to the speed of light.
Doesn’t this fact alone disprove the entire hypothesis? If I set up a 1:1 simulation of our universe, then just sit back and watch, any attempts by my simulant people to create something that would exhaust all of my hardware would just… not work? Blue screen? Crash the system? Crunching the numbers of a 1:1 sim within a 1:1 sim would not be physically possible for a processor that can just about handle the first simulation. The simulation’s own simulated processors would still need to have their processing done by Meat World; you’re essentially just passing the CPU-buck backwards like it’s a rugby ball until it lands in the lap of the real world.
And this is just if the simulated people create ONE simulation. If 10 people in that one world decide to set up similar simulations simultaneously, the hardware for the entire sim reality would be toast overnight.
What am I not getting about this?
Cheers!
If our simulated universe’s framerate drops because of the extra compute required for the nested simulations we’re running, would we even notice? It stands to reason that everything would slow down, including our perception of the universe.
For all we know, the smallest unit of time we can measure in our simulated existence could take an hour or more to render outside the simulation. To us, it’s nearly instantaneous.
EVE Online (the video game) uses a similar trick, “time dilation”, to handle large fights with thousands of players: when the server can’t keep up, in-game time itself slows down.
Or the frame quality drops, and we’re all Jerry. “My man!”
yes 👉
Slow down!
Looking good :)
Just watch for graphics tearing. On a completely unrelated note, why are earthquake zones so heavily populated?
It’s a scenario that Neal Stephenson covers in his book “Fall; or, Dodge in Hell”. Interesting read, although it’s one of my least favorite books of his, and I liked the first book in the “Dodge” series a lot better.
Cool, I’ll have to add that to my list. Thanks for the recommend!
But if the real world sets up a simulated world which more or less perfectly simulates itself
This is the crux of the logical error you made. It’s a common error, but it’s important to recognize here.
If we’re in a simulation, we have no idea what resources are available in the simulation “above” us. Suppose the energy density up there is 100x as high as ours? Suppose the subjective experience of the passage of time up there is 100x faster than ours?
Another thing is that we have no idea how long it takes to render each frame of our simulation. Could take a million years. As long as it keeps running though, and as long as the simulation above us is patient, we keep ticking. This is also where the subjective experience of time matters. If it takes a million years, but their subjective “day” is a trillion years long, it becomes feasible to run us for a while.
And, finally, there’s no reason to assume we’re a complete simulation of anything. Perhaps the simulation was instantiated beginning with this morning–but including all memories and documentation of our “historical” past. All that past, all that experience is also fake, but we’d never know that because it’s real to us. In this scenario, the simulation above us only has to simulate one day. Or maybe even just the experiences of one PERSON for one day. Or one minute. Who knows?
The main point is we don’t know what’s happening in the simulation above ours, if it exists, but there’s no reason to assume it’s similar to ours in any way.
Quantum is weird. If we are in a simulation, that would explain a lot of it, because the quantum effects we see would actually just be lightweight simulations of much deeper mechanics.
As such, if we were simulating a universe, there’s every chance that we may decide to only simulate down to individual atoms. So the people in the simulation would probably discover atoms, but then they would have to come up with their own version of quantum mechanics to describe the effects that we know come from quarks.
The point is that each layer may choose to simulate things at slightly lower fidelity to save on resources, and you would have no way of knowing.
Indeed, and, as an interesting corollary, if we accept the concept of reduced-accuracy simulations as axiomatic, then it might be possible to figure out how close we are to the theoretical “bottom” of the simulation stack. There are only so many orders of magnitude, after all; at some point you’re only simulating one pixel wiggling around, and that’s not interesting enough to keep going down.
There is not, as far as I know, any way to estimate the length of the stack in the other direction, though.
I have never understood the argument that QM is evidence for a simulation because the universe is using fewer resources, or something like that, by not “rendering” things at that low a level. The problem is that, yes, it’s probabilistic, but it is not merely probabilistic. We already have probability in classical mechanics, like when dealing with gases in statistical mechanics, and we can model that just fine. Modeling wave functions is far more computationally expensive because they do not even exist in traditional spacetime but in an abstract Hilbert space whose complexity can grow exponentially faster than that of classical systems. That’s the whole reason for building quantum computers: it’s so much more computationally expensive to simulate this that it’s more efficient just to have a machine that can do it. The laws of physics at a fundamental level get far more complex and far more computationally expensive, not the reverse.
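To put a rough number on “exponentially” (a back-of-the-envelope sketch; the qubit counts and the 16 bytes per amplitude are just illustrative assumptions): a full classical state vector for n entangled qubits needs 2^n complex amplitudes.

```python
# Back-of-the-envelope: a full state vector for n entangled qubits holds 2**n
# complex amplitudes; at ~16 bytes per amplitude, memory explodes fast.
# The chosen n values are arbitrary, purely for illustration.
for n in (10, 30, 50, 300):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n:>3} qubits -> {amplitudes:.2e} amplitudes (~{gib:.2e} GiB)")
```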
To be clear, I’m not arguing that it is evidence; I’m merely arguing that it could be a result of how they chose to render our simulation. And just because it’s more computationally expensive on our side does not necessarily mean it’s more expensive on their side, because we don’t know what the mechanics of the deeper layer may be.
For example, it would be a lot less computationally expensive to render our simulation accurately only down to the cellular level than down to the atomic scale. From there, we could simply replicate the rules of how molecules work without actually rendering them, such as “cells seem to have a finite amount of energy based on the food you consume, and we can model the mathematics of how that works, but we can’t seem to find a physical structure that allows that to function”.
As others have said, our reference of time comes from our own universe’s rules.
Ergo if rendering 1 second of our time took 10 years of their time, we wouldn’t measure 10 years, we’d measure 1 second, so we’d have no way of knowing.
It’s worth remembering that simulation theory is, at least for now, unfalsifiable. By its nature there’s always a counterargument to any evidence against it, therefore it always remains a non-zero possibility, just like how most religions operate.
You’re thinking in terms of how we do simulations within our universe. If the universe is a simulation then the machine that is simulating it is necessarily outside of the known universe. We can’t know for sure that it has to play by the same rules of physics or even of logic and reasoning as a machine within our universe. Maybe in the upper echelon universe computers don’t need power, or they have infinite time for calculations for reasons beyond our understanding.
But that’s just a guess. It’s not necessarily true. You’re just saying “simulations might be possible, therefore they are definitely possible, therefore we are likely in a simulation”.
That’s not logically sound. You can replace “simulation” with “God” and prove the existence of God similarly. It’s just a guess.
Yes, that’s why no one says it’s a fact; it’s a theory
I’d just like to interject for a moment. What you’re referring to as “theory” is, in fact, a “hypothesis”…
Some might say a game theory
Or what if the entity that simulates us can just “dream” the simulation to make it happen?
Like Azathoth?
I think there are a few tricks that still make it possible. First, nothing says that you have to, or really that you even can, simulate a universe 1:1. When you think about it, we already simulate millions of universes in video games, but they are dramatically simpler than our reality. So our parent reality could be much more complex than our own.
Second, physics could be vastly different from one layer to another. Maybe in the real reality, entropy isn’t that significant and quasi-perpetual motion is possible, making energy super cheap. Maybe the limits in our universe, like the speed of light and the Planck scale, are just hardware caps to prevent us from using too much compute.
You are correct, but missed one important point, or actually made an important wrong assumption. You don’t simulate a 1:1 version of your universe.
It’s impossible to simulate a universe the size of your own universe, but you can simulate smaller universes, or, more accurately, simpler universes. Think of video games: you don’t need to simulate everything, you just simulate some things, while the rest is just a static image until you get close. The cool thing about this hypothetical scenario is that you can think about how a simulated universe might be different from a real one, i.e. what shortcuts we could take to make our computers able to simulate a complex universe (even if smaller than ours).
For starters, you don’t simulate everything. Instead of every particle being a particle, which would be prohibitively expensive, particles smaller than a certain size don’t really exist; instead you have a function that tells you where they are when you need them. For example, simulating every electron would be a lot of work, but if instead of simulating them you can run a function that tells you where they are at a given frame of the simulation, you can act accordingly without having to actually simulate them. This would cause weird behaviors inside the simulation, such as electrons popping in and out of existence and teleporting over gaps smaller than the radius of your spawn_electron function, which in turn would impose a limit on the size of transistors inside that universe. It would also mean that when you fire electrons through a double slit they interact with one another, because they’re just a function until they hit anything, but if you try to measure which slit they go through then they’re forced to collapse before that, and so they don’t interact with one another. But that’s all okay, because you care about macro stuff (otherwise you wouldn’t be simulating an entire universe).
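A toy sketch of what that on-demand idea could look like (spawn_electron is the hypothetical name from the paragraph above, not any real engine API, and the distribution below is made up; the point is only that positions are computed when queried, never stored):

```python
import math
import random

def spawn_electron(seed: int, frame: int) -> tuple[float, float, float]:
    """Answer "where would electron `seed` be at `frame`?" only when asked.

    Deterministic: the same query always returns the same position, so the
    electron never has to be stored or stepped forward between queries.
    """
    rng = random.Random(seed * 1_000_003 + frame)  # cheap, made-up mixing of seed and frame
    r = rng.gauss(1.0, 0.05)                       # fuzzy "orbital radius", arbitrary units
    theta = rng.uniform(0.0, 2.0 * math.pi)
    phi = rng.uniform(0.0, math.pi)
    return (r * math.sin(phi) * math.cos(theta),
            r * math.sin(phi) * math.sin(theta),
            r * math.cos(phi))

# Nothing is simulated until something macro-scale (a detector, a chemist) asks:
print(spawn_electron(seed=42, frame=1_000))
print(spawn_electron(seed=42, frame=1_001))  # may "jump" - there is no trajectory in between
```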
Another interesting thing is that you probably have several computers working on it, and you don’t really want loading screens or anything like that, so instead you impose a maximum speed inside the simulation; that way, whenever something goes from one area of the simulation to the next, it takes enough time for everything to be “ready”. It helps if you simulate a universe where gravity is not strong enough to cause a crunch (or your computers will all freeze trying to process it). So your simulated universe might have large empty spaces that don’t need much computational power, and because traveling through them takes long enough, it’s easy to sync the transition from one server to the next. If, on the other hand, the maximum speed were infinite, you could have objects teleporting from one server to the next, causing a freeze on those two, which would leave them out of sync with the rest.
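As a toy sketch of that handoff idea (the sector width, the speed cap, and every name here are made-up assumptions, just to show why a hard speed limit always gives the next server a tick of warning):

```python
# Space is split into fixed-width sectors, one server each; a hard speed cap
# guarantees nothing crosses more than one boundary per tick, so the next
# server always gets at least a full tick of warning. Numbers are arbitrary.
SECTOR_WIDTH = 1_000.0   # how much space one server owns
MAX_SPEED = 100.0        # the in-universe "speed of light"
TICK = 1.0               # one simulation step

def owning_server(x: float) -> int:
    return int(x // SECTOR_WIDTH)

def step(x: float, v: float) -> float:
    v = max(-MAX_SPEED, min(MAX_SPEED, v))  # clamp: no teleporting between servers
    return x + v * TICK

x, v = 995.0, 250.0
new_x = step(x, v)
if owning_server(new_x) != owning_server(x):
    # Because MAX_SPEED * TICK < SECTOR_WIDTH, a handoff is always to an
    # adjacent server and can be announced a tick in advance - no freezes.
    print(f"handoff: server {owning_server(x)} -> server {owning_server(new_x)}")
```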
And that’s the cool thing about thinking about how a simulated universe would work: our universe is weird as fuck, and a lot of that weirdness looks like the type of weirdness that would be introduced by someone trying to run their simulation cheaper.
My mind is blown. This is very well written. Thank you
So we’re getting Truman Show’ed, but on a scale assumed to be beyond our capability to investigate.
First, this is not really science so much as it is science-themed philosophy or maybe “religion”. That being said, to make it work:
- We don’t have any way of knowing the true scale and “resolution” of a hypothetical higher-order universe. We think the universe is big, we think the speed of light is supremely fast, and we think the subatomic particles we measure are impossibly fine-grained. However, if we had a hypothetical simulation that is self-aware but not aware of our universe, its inhabitants might conclude some slower limitation in the physics engine is supremely fast, that triangles are the fundamental atoms of the universe, and that pixels of textures represent their equivalent of subatomic particles. They might try to imagine making a simulation engine out of in-simulation assets and conclude it’s obviously impossible, without ever being able to even conceive of volumetric reality with atoms and subatomic particles and computation devices way beyond anything that could be constructed out of in-engine assets. Think about people who make ‘computers’ out of in-game mechanics and how absurdly ‘large’ and underpowered they are compared to what we would be used to. Our universe could be “minecraft” level as far as a hypothetical simulator is concerned; we have no possible frame of reference to gauge some absolute complexity of our perceived reality.
- We don’t know how much of what we “think” is modeled is actually real. Imagine you are in the Half-Life game as a miraculously self-aware NPC. You’d think about the terribly, impossibly complex physics of the experiment gone wrong. Those of us outside of that know it’s just a superficial model consisting of props to serve the narrative, but every gadget the NPC sees “in-universe” is in service of saying “yes, this is a real, deep phenomenon, not merely some superficial flashes”. For all you know, nothing behind you is modeled in anything but the vaguest way, every microscope view is just a texture, and every piece of knowledge about the particle colliders is just “lore”. All those experiments showing impossibly complex phenomena could just be props in service of a narrative, if the point of the simulation has nothing to do with “physics” but just needs some placeholder physics to be plausible. The simulation could be five seconds old, with all your memories prior to that just baked-in “backstory”.
- We have no way of perceiving “true” time; it may take a day of “outside” time to execute a second of our time. We don’t even have “true” time within our observable universe, thanks to relativity being all weird.
- Speaking of weird, this theory has appeal because of all the “weird” stuff in physics. Relativity and quantum physics are so weird. When you get down to subatomic resolution, things start kind of getting “glitchy”: we have this hard-coded limit to relative velocity, and time and length get messed up as you approach that limit. These sound like the sort of things we’d end up with if we tried simulating a universe, so it is tempting to imagine a higher-order universe with less “weirdness”.
Just to spin this a bit further, if we are living in a simulation, does it have a purpose? Sometimes I ask myself if the purpose of such a simulation for humanity could be to see how long it takes from the big bang to the creation of artificial life. Maybe our purpose is to create such artificial life that can travel to the stars, because as humans we are not really fit to do that. Maybe we are a mere step on the ladder of our universe’s purpose.
Such a purpose would inform the constraints. If we are just “the sims” on steroids, then all the deep physics are absolutely utterly faked and we are just “shown” convincing fakery. If it’s anthropological, then similar story that the physics are just skin deep. If it’s actually modeling some physics thing, then maybe we are “observing” real stuff.
But again, this is all just for fun. It’s not vaguely testable and thus not scientific despite the sciencey theme of it, just something to ponder.
It’s simple - you cheat. In computer games we only draw the things you are looking at, and we only give the appearance of simulating the whole thing but the ‘world’ or universe is actually very limited and you can’t visit most places. Sound familiar?
The fun thing about this is that we have evidence that this is how our reality works. The double slit experiment showed that particles change their behavior when observed. (Gross oversimplification and only under very specific circumstances but still extremely fascinating.)
Even that requires overhead
The real problems would be exponential (x^m) computational issues. A finite number of AIs running around in a finite amount of space is a linear problem. Basically, very possible.
Yes, but not even close to as much as the alternative.
I don’t think you can approximate Turing-complete algorithms, though. And then you end up with a situation where the simulation is making these Turing machines out of other simulated components, so it’s even more overhead than just giving the simulated agents direct CPU time.
The Turing test has been passed by AI just recently, as it happens. Our computational load is trivial in the scheme of things.
“Turing Completeness” != “Turing Test”
I’ve always thought the sysadmin of our simulation must be really pissed that we keep inventing better and better telescopes.
The JWST probably cost him a weekend adding more nodes to the cluster.
More likely they were way ahead of that by setting the draw distance as the speed of light.
and setting the speed of light.
Time doesn’t have to be 1:1 between a host and a simulation. The host can take as long as it wants to render the next step in a simulation, and any observers within the simulated universes would not be able to discern the choppiness of their flow of time.
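A minimal sketch of that decoupling (the sleep call just stands in for the host taking a variable, arbitrarily long time per step; all numbers are illustrative):

```python
import time

SIM_DT = 1.0      # inside the simulation, each step is exactly one "second"
sim_clock = 0.0

for n in range(3):
    start = time.perf_counter()
    time.sleep(0.5 * (n + 1))   # stand-in for rendering taking longer and longer
    host_cost = time.perf_counter() - start
    sim_clock += SIM_DT         # observers inside only ever see this clock
    print(f"host spent {host_cost:.2f}s rendering; sim clock reads {sim_clock:.0f}s")
```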
Fellow macaque here. Not only that, but time does not even run 1:1 between 2 places in our own universe. Plus, there are all kinds of quantum fuckery, where we can’t really detect all the properties of a certain particle, or the particles act like waves as long as they do not interact with anything, because… who knows?
Particles and waves aren’t actually separate as we were taught in school. They are in reality a third thing with properties of both.
As for detecting properties, that’s a limit of our technology, not the universe. In order to observe something we currently have to interact with it (e.g. bounce some light off it). It’s possible that in the future we develop techniques that don’t require interaction, like reading the Higgs field directly, for example.
If our technology is limited so we can never see beyond something, why even propose it exists? Bell’s theorem also demonstrates that if you do add hidden parameters, they would have to violate Lorentz invariance, meaning they would contradict the predictions of our current best theories of the universe, like GR and QFT. Even as pure speculation it’s rather dubious, as there’s no evidence that Lorentz invariance is ever violated.
My issue is similar: each “layer” of simulation would necessarily be far simpler than the layer in which the simulation is built, and so complexity would drop off exponentially, such that even an incredibly complex universe would not be able to support conscious beings in simulations within only a few layers. You could imagine that maybe the initial universe is so much more complex than our own that it could support millions of layers, but at that point you’re just guessing, as we have no reason to believe there is even a single layer above our own, and the whole notion that “we’re more likely to be in a simulation than not” just ceases to be true. You can’t actually put a number on it, or even a vague description like “more likely.” It’s ultimately a guess.
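Just to make that drop-off concrete with completely made-up numbers (the budgets and the per-layer fraction below are arbitrary assumptions, only there to show how fast nesting runs out of headroom):

```python
# Hypothetical: the top universe has some "complexity budget", and each nested
# layer can only afford a small fraction of its parent's. All numbers invented.
PARENT_BUDGET = 1e80   # arbitrary stand-in for the top universe's capacity
FRACTION = 1e-3        # arbitrary share each child layer gets
MIN_FOR_MINDS = 1e60   # arbitrary threshold for "still rich enough for conscious beings"

layers, budget = 0, PARENT_BUDGET
while budget * FRACTION >= MIN_FOR_MINDS:
    budget *= FRACTION
    layers += 1
print(f"nesting runs out of headroom after about {layers} layers")
```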
Who says there’s resource requirements in the physics of the upper levels?
If someone has the resources to simulate a universe, they probably have the resources to simulate an arbitrarily large number of universes. This also assumes that a civilisation within the simulated universe reaches the level of technological advancement required to make a universe-level simulation. We’re talking, probably, whole networks of Matrioshka Brains, that sort of thing.
Not an answer to your question, but:
What if only one person is being simulated in full, and everyone else is just simulated for the moment they interact with that original sim? That would mean only one of us on this thread is the OG sim, and the rest of us only exist because we are/were going to interact here, and now.
i smoke too much weed for this topic
On the off chance I only exist to argue with you on the Internet, I feel like it’s my duty to say you’re wrong, and I have nothing to back up my viewpoint because the resources weren’t allotted to have any supporting data.
I hope I exist tomorrow.
Nuh uh
Just make sure to reply again to prevent being garbage collected.
Simple; take a picture of yourself to hold a circular reference.
I used to do that at parties. I’d find someone really drunk and say, “I don’t know why you picked me to say this, but we’re not real, you’re the only one that’s real, and we’re all afraid you’re about to wake up and we’ll all disappear.”
You are in a coma. We’re trying a new technique to communicate with you. We aren’t sure where or when this message will appear to you. You’ve been in a coma for 20 years. Please wake up. We miss you.