Good story, and I absolutely believe this to be true. It just seems like the most logical thing. I mean, as soon as any civilization can simulate a universe, or even a galaxy, they would. And they would do it infinitely many times. So the chances that we're the original are infinite to 1. That doesn't mean our lives aren't real, though, or that we should change our behavior. We still experience joy. So experience as much joy as you can before your simulation ends =)
I don't believe the part about all the simulations being linked though. I don't see why that would have to be the case. Our simulation could have "started" 1 second ago and we wouldn't know it.
First of all, the chance that you are a simulation does not depend on the number of simulations in existence. In fact, there is no such chance, as there is no random distribution process you are observing, unless you claim that every observer has one of finitely many souls somehow assigned to them. Such things belong in the realm of religion. But even then, either you are real or you are not. If you are not, there is no reason to assume that you also exist in reality.
And then there are the fundamental laws of nature. If our universe is indeed a simulation, then it is either just an approximation of reality, or reality follows different laws of nature. In that case, again, it makes no sense to think of oneself as a random choice between one reality and many simulations, since one's own existence depends on one specific kind of universe.
A physically random process isn’t needed for probabilities to make sense.
Further, suppose a computer agent knew that 6 copies of itself (including its current state) would soon be spun up, each in a different VM with its own simulated environment, but that the current copy of itself would also continue running.
While each of the simulated environments differs from the others and from the true/outer environment, they don’t differ in a way that can quickly be detected.
Different actions in the simulated environments and in the “real”/original environment would have different effects in the original environment, effects which the agent cares about.
In order to best produce outcomes in accordance with what the agent cares about, how should the agent act?
I think it makes sense for the agent to act as if there is a 1/7 chance that it is in the original environment, and a 6/7 chance it is in one of the 6 simulated environments.
How could it be otherwise?
Suppose the original, and each of the VM copies, is given a choice between two options, X or Y, where if it chooses X, then if it is the original, it gets +m utility of benefit in the original environment, but if it is one of the copies in a VM, it instead gets -n utility of what it cares about in the original environment. If it chooses Y, there is no change in the reward.
The combinations of values of m and n for which it makes sense to choose X are exactly those for which choosing X would make sense assuming it has a 1/7 chance of being the original and a 6/7 chance of being in a VM.
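To make that concrete, here’s a minimal sketch of the decision rule (my own illustration, not anything from the thread; the function name and the example numbers are made up):

```python
# Sketch: one policy shared by the original and all 6 VM copies, evaluated
# as if each instance has a 1/7 chance of being the original.
def expected_value_of_X(m, n, num_copies=6):
    """Per-instance expected effect on the original environment of choosing X,
    weighting 'I am the original' at 1/(num_copies + 1)."""
    p_original = 1 / (num_copies + 1)
    return p_original * m - (1 - p_original) * n

# Choosing X beats Y (which changes nothing) exactly when m > 6 * n, which is
# also when the total effect of all 7 instances choosing X (m - 6 * n) is
# positive: the 1/7 vs 6/7 weighting agrees with the total-effect view.
print(expected_value_of_X(m=7, n=1) > 0)   # True:  7 > 6 * 1
print(expected_value_of_X(m=5, n=1) > 0)   # False: 5 < 6 * 1
```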
That being said, I don’t think we are in a simulation. I just don’t think that the concept of “assigning a probability other than 0, 1, or 1/2 to being in a simulation” is always unreasonable in all conceivable circumstances.
I just happen to think that it is highly unlikely for us.
Your contrived example starts with the very premise of a distribution. An agent gets copied. There are 7 variants. You run an experiment and argue that for 6 of the 7 variants your strategy is successful. All of this builds on the premise of a distribution, because otherwise the notion of chance has no meaning. Note that the distribution need not be random, but it must exist.
Take the perspective of your agent: there is no way to learn about the number of other agents, and this number is absolutely central to your argument. Every agent will at some point notice that a specific strategy is successful. It will appear like a universal law of nature. For each agent there is no chance involved; it is 100% predetermined what the correct strategy is.
I was imagining that it was informed beforehand of the number of copies that would be made. Also, that the different copies couldn’t tell from the results of their choices whether they were in the original environment or not. So there is some best strategy in each env, but because they can’t tell which env they are in, they can’t actually have different strategies.
Saying there is no chance involved is like saying that if John rolls a fair 6-sided die under a cup, and you and Jane don’t know the result, and Jane offers to bet you at some odds that the die is showing a 4, then there is no probability involved because the value of the die has already been determined. Ok, sure, it is already determined in the world, but one should still assign probability 1/6 to the die showing a 4.
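For what it’s worth, here’s a tiny numerical sketch of that point; the stakes are made up purely for illustration:

```python
import random

# Expected value of taking Jane's bet at given stakes, treating P(4) as 1/6.
def bet_value(payout_if_not_4, loss_if_4, p4=1/6):
    return (1 - p4) * payout_if_not_4 - p4 * loss_if_4

print(bet_value(1, 4))   # +1/6: worth taking
print(bet_value(1, 7))   # -1/3: decline

# Monte Carlo check: each roll is "already determined" before the decision,
# but the 1/6 policy still tracks the long-run results.
random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]
profit = sum(1 if r != 4 else -4 for r in rolls)
print(profit / len(rolls))   # close to +1/6
```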
You have to make the pretty big assumption that it’s even possible to simulate a universe within the universe. Such an assumption is basically like assuming we could move faster than light, or reverse entropy. It seems unlikely to me.
You wouldn't need to simulate the entire universe in perfect detail - just the parts users are interacting with, and just to sufficient detail that they didn't notice anything amiss. This is exactly how we render games.
A neat trick could be to avoid simulating the internal computer by recognizing that it is running the same computation: instead of simulating a computer running that computation, just reuse the information you are already computing, and copy it over wherever it is relevant to the on-screen output.
So, essentially replacing the simulation with an oracle of sorts for the rest of the world they are in.
This would allow the internal simulation to be just as detailed, because it would just be re-using the same results.
Of course, that only works if it is a simulation of exactly the same world.
I suppose if the simulation was the same except for a small intervention, then you could maybe use what you are already computing as a starting point, and then compute the differences that result from the intervention.
But, because of chaos stuff, I imagine that that would quickly spiral out to the point where it wouldn’t save much computation.
If you are willing to fudge things though, you could make it so that the results of the interventions are subtly adjusted so that the differences resulting from them don’t affect much, and where they would be negligible, make them zero,
and, with the recursive nature of it, you could fudge the results to make it reach a fixed point (or cycle) more quickly, so that you don’t end up having infinitely many distinct levels.
Edit: of course, I don’t think this could be done in our universe, for our universe, or even for one very similar to our universe. We might be able to do something similar for a much, much simpler world. And if we specifically hard-code in a part of the world which represents “a computer simulating this world”, then that’s no issue, though it isn’t particularly satisfying/surprising. It isn’t a surprise that a videogame can have a scaled-down copy of its window drawn as a texture on some object in the game.
Also, this version wouldn’t really suggest the “infinitely many copies are in the fixed point part, and only finitely many are above that part of the sequence” thing, because the chain would just stop once it becomes cyclic or a fixed point.
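Here’s a toy sketch of that fixed-point behaviour (my own illustration, nothing like a real physics simulation, and it assumes numpy): a grid whose lower-right quadrant is supposed to show a half-resolution copy of the whole grid. Instead of simulating level after level, just iterate the “paste a downscaled copy of yourself into yourself” map until it stops changing:

```python
import numpy as np

def paste_self_copy(img):
    """Paste a crude half-resolution copy of the grid into its own lower-right quadrant."""
    h, w = img.shape
    small = img[::2, ::2]                  # naive 2x downscale
    out = img.copy()
    out[h // 2:, w // 2:] = small          # the inner "screen"
    return out

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64))
for i in range(20):
    new = paste_self_copy(frame)
    if np.array_equal(new, frame):
        print(f"fixed point after {i} applications of the map")
        break
    frame = new
```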
Also, there might be, like, fundamental issues preventing us from detecting whether a computation in the simulation is simulating the same thing? Quine-ing is possible, so the issue isn’t representing the same code, but “detecting whether some process is equivalent to running some code” seems like it might be an issue.
Like, with Rice’s theorem or something.
Like, when simulating a Game of Life world, is it possible to automatically, eventually, detect all parts of it that are doing something equivalent to, e.g., enumerating primes?
Like, all parts for which there is a fast algorithm for computing that part of it using the sequence of primes, and vice versa?
Or, eventually finding all such parts that last indefinitely and aren’t interrupted/broken.
Actually, I’m guessing yes, that should be possible. Dovetail together all “fast algorithms” that attempt to predict parts of the state using the list of primes (and where the primes can also be quickly computed using that state through another “fast algorithm”), and then only keep the ones that are working.
All the ones that eventually stop working should be eventually ruled out, and the dovetail process should eventually find all the ones that do work.
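Something like this toy sketch is what I mean (the “candidates” here are a few hand-picked stand-ins for an enumeration of all fast programs, the real dovetailing of time budgets is glossed over, and it assumes sympy for the primes):

```python
from itertools import count
from sympy import prime   # prime(t) is the t-th prime; assumes sympy is available

def observe_region(t):
    """Toy stand-in for 'what that part of the board shows at step t'."""
    return prime(t)        # pretend the region really is enumerating primes

# Hypothetical candidate predictors, standing in for an enumeration of programs.
candidates = {
    "primes": lambda t: prime(t),
    "odds": lambda t: 2 * t + 1,
    "squares": lambda t: t * t,
}

alive = dict(candidates)
for t in count(1):
    actual = observe_region(t)
    for name, predict in list(alive.items()):
        if predict(t) != actual:   # one wrong prediction rules it out forever
            del alive[name]
    if t == 20:
        break

print(sorted(alive))   # only predictors that kept working remain, as candidates
```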
Ah, but wait, that doesn’t result in conclusively deciding “yes, there is one here”, only giving candidates.
Well, still, for fast translations between the states of the simulated prime-finding machine and lists of primes, I expect there should be a proof that,
Err, wait, no.
Well, kinda.
If the machine doesn’t just enumerate primes, but instead enumerates primes except that, after the 2^(2^n)-th prime, it looks at the n-th candidate for a proof of a self-contradiction in ZFC, and if it is a valid proof of a self-contradiction, it messes stuff up and no longer computes primes,
(Or maybe instead of checking all of the n-th candidate, checks one step of the current candidate),
then ZFC can’t prove that this will always produce primes.
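In code, the construction would look something like this sketch; the proof checker is a purely hypothetical stub (actually checking candidate ZFC proofs is the part being glossed over), and it assumes sympy for nextprime:

```python
from itertools import islice
from sympy import nextprime   # assumes sympy is available

def is_valid_zfc_contradiction_proof(candidate_index):
    """Hypothetical stub: True iff candidate number `candidate_index` encodes a
    valid ZFC proof of a contradiction. Presumably it never returns True, but
    ZFC itself cannot prove that."""
    return False

def tricky_prime_machine():
    p, produced, n = 1, 0, 0
    while True:
        p = nextprime(p)
        produced += 1
        yield p
        if produced == 2 ** (2 ** n):   # after the 2^(2^n)-th prime...
            if is_valid_zfc_contradiction_proof(n):
                while True:             # ...possibly stop acting like a
                    yield 0             # prime enumerator at all
            n += 1

# Since the stub never fires, this just prints the first few primes; proving
# that it always will is the part ZFC gets stuck on.
print(list(islice(tricky_prime_machine(), 8)))   # [2, 3, 5, 7, 11, 13, 17, 19]
```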
We could maybe say that this doesn’t count as doing the same computation, because there’s no sufficiently good correspondence between the states and the sequence of primes, but, eh.
On the other hand, if one settles for merely high confidence that some particular process in the simulation is computing a given program, before replacing it with something that just gets the results of that computation from outside, one could probably use logical induction? Or, use logical induction for “it can be accurately predicted by this at least until time t in the simulation”, and then use the simplification until time t.
Of course, that’s not practical, because the current best known logical induction algorithm is much too slow. But theoretically.
Currently accepted models of physics, aiui, don’t permit unlimited computation within a bounded space and bounded time?
Err, wait, I was going to cite Bremermann's limit, but that applies to moving from one quantum state to an orthogonal quantum state? Maybe it doesn’t rule this out completely if the computation is done in a way that doesn’t involve enough rotation of states for them to be distinguishable?
Ok, but I expect that with enough math that loophole could be ruled out.
Is there a consensus (or most popular) theory for how mind/consciousness would work in the simulation theory? How do things in a simulation become subjects of experience?
Consciousness is an emergent property of a physical brain, neurons firing and passing neurotransmitters across synapses, just as an economy is an emergent property of a society buying and selling things, or ant-colony behaviour is an emergent property of the behaviour of individual ants, for example. It's not somehow "above" the simulation, just a higher order of complexity defined by lower-level rules, similar to a glider in Conway's Game of Life.
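The glider example is easy to make concrete; here's a minimal sketch (my own toy code, not from the comment above) showing a 5-cell pattern that the local rules never mention, yet which persists and moves:

```python
from collections import Counter

def step(live):
    """One Game of Life step; `live` is the set of (x, y) coordinates of live cells."""
    # Count, for every cell, how many live neighbours it has.
    neigh = Counter((x + dx, y + dy)
                    for (x, y) in live
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in neigh.items() if n == 3 or (n == 2 and c in live)}

# The glider: nothing in the rules mentions it, yet after 4 steps the same
# 5-cell shape reappears shifted one cell down and to the right.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True
```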
That's one view all the reductionists gladly use. I do not disagree with this, but you need to also take into account numerous events throughout human existence where this apparent "emergent property" breaks all the realms of so-called "reality" and lets one witness something which, subjectively, seems above the simulation.
Maybe it is an emergent property, but maybe it emerges not just from the complexities of an individual being, but also from something else which is simply missing from our current understanding.
Forget about simulation. We do not have answers to basic questions about how consciousness works in the real world. For example, we have no idea how what we perceive as the flow of time arises, and we do not know why many/most people feel that they have free will when the equations of physics are fully deterministic.
Of course, it would just seem that (and this is in answer to another of the replies to my question, too) if you don’t find emergentism persuasive, then you also have to reject the simulation theory (?)
A notion of simulation implies control and, as such, free will and time flow. And since we do not understand the latter, the notion of simulation itself is pure speculation.