Thank you, Ape; this sounds right.
Davey
I don’t understand. We should entertain the possibility because it is clearly possible (since it’s unfalsifiable), because I care about it, because it can dictate my actions, etc. And the probability argument follows after specifying a reference class, such as “being distinct” or “being a presumptuous philosopher.”
You are misinterpreting the Presumptuous Philosopher (PP) example. Consider the following two theories:
T1: I’m the only one that exists; everyone else is an NPC.
T2: Everything is as expected; I’m not simulated.
Suppose for simplicity that both theories are equally likely. (This assumption doesn’t really matter.) If I define the reference class Presumptuous Philosopher = distinct human like myself = 1 in 10,000 humans, then under SSA I get that in most universes I am indeed the only one, but regardless, under SIA most copies of myself are not simulated.
I don’t appreciate your tone, sir! Anyway, I’ve now realized that this is a variant on the standard Presumptuous Philosopher problem, which you can read about here if you are mathematically inclined: https://www.lesswrong.com/s/HFyami76kSs4vEHqy/p/LARmKTbpAkEYeG43u#1__Proportion_of_potential_observers__SIA
Thank you, Anon User. I thought a little more about the question, and I now think it’s basically the Presumptuous Philosopher problem in disguise. Consider the following two theories, which are equally likely:
T1: I’m the only real observer.
T2: I’m not the only real observer.
For SIA, the odds T1:T2 are 1:(8 billion / 10,000) = 1:800,000, so indeed, as you said above, most copies of myself are not simulated.
For SSA, the odds are instead 10,000:1, so in most universes in the “multiverse of possibilities” I am the only real observer.
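Here’s a quick numerical sketch of that arithmetic, in case it helps (the population and distinctness figures are just the ballpark numbers from above, not precise values):

```python
# A quick check of the odds above. Equal priors on T1 and T2,
# so the posterior odds equal the likelihood ratio.

N = 8_000_000_000       # number of humans if T2 is true
DISTINCT = 1 / 10_000   # fraction of humans as distinct as I am

# SSA: compare the chance that a randomly sampled observer in each
# world would see my evidence ("I am distinct"):
#   T1 -> 1 (I'm the only observer, and I'm distinct)
#   T2 -> DISTINCT
ssa_odds_t1 = 1 / DISTINCT     # 10,000 : 1 in favor of T1

# SIA: weight each world by how many observers see my evidence:
#   T1 -> 1 such observer;  T2 -> N * DISTINCT = 800,000
sia_odds_t2 = N * DISTINCT     # 800,000 : 1 in favor of T2

print(f"SSA odds, T1:T2 = {ssa_odds_t1:,.0f}:1")
print(f"SIA odds, T2:T1 = {sia_odds_t2:,.0f}:1")
```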
So it’s just a typical Presumptuous Philosopher problem. Does this sound right to you?
Yes, okay, fair enough. I’m not certain about your claim in quotes, but neither am I certain about my own claim, which you phrased well in your second paragraph. You have definitely answered this better than anyone else here.
But still, I feel like this problem is somehow similar to the Presumptuous Philosopher problem, and so there should be some anthropic reasoning to deduce which universe I’m likely in / how exactly to update my understanding.
I suspect it’s quite possible to give a mathematical treatment of this question; I just don’t know what that treatment is. I suspect it has to do with anthropics. Can’t anthropics deal with different potential models of reality?
The second part of your answer isn’t convincing to me, because I feel like it assumes we can understand the simulators and their motivations, when in reality we cannot. (These may not be the future-human simulators philosophers typically think about, mind you; they could be so radically different that ordinary reasoning about their world doesn’t apply.) But anyway, this latter part of your argument, even if valid, only affects the quantitative part of the initial estimates, not the qualitative part, so I’m not particularly concerned with it.
That makes sense. But to be clear, it makes intuitive sense to me that the simulators would want to make their observers as “lucky” as I am, so I assigned 0.5 probability to this hypothesis. Now I realize this is not the same as Pr(I’m distinct | I’m in a simulation), since there’s some weird anthropic reasoning going on: only one side of this probability involves billions of observers. But what would be the correct way of approaching this problem? Should I have divided 0.5 by 8 billion? That seems like too much. What is the correct mathematical approach?
Good questions. Firstly, let’s just take as an assumption that I’m very distinct, not just unique. In my calculation, I set Pr(I’m distinct | I’m not in a simulation) = 0.0001 to account for this (1 in 10,000 people), but honestly I think the real probability is much, much lower than this figure (maybe 1 in a million), so I was even being generous to your point there.
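For concreteness, here is the naive update those numbers produce, taking Pr(I’m distinct | I’m in a simulation) = 0.5 (the intuition I mentioned before) and a 50/50 prior on being simulated, as in the equal-priors setup; that conditional is exactly the contested part, and the anthropic corrections discussed elsewhere in this thread would change it:

$$
\Pr(\text{sim}\mid\text{distinct})
= \frac{\Pr(\text{distinct}\mid\text{sim})\Pr(\text{sim})}{\Pr(\text{distinct}\mid\text{sim})\Pr(\text{sim}) + \Pr(\text{distinct}\mid\neg\text{sim})\Pr(\neg\text{sim})}
= \frac{0.5 \cdot 0.5}{0.5 \cdot 0.5 + 0.0001 \cdot 0.5}
\approx 0.9998
$$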
To your second question: the reason why, on my simulator’s earth, I imagine the chance of uniqueness to be larger is that if I’m in a simulation, there could be what I will call “NPCs”: people who seem to exist but are really just figments of my mind. (Whereas the probability of NPCs existing if I’m not in a simulation is basically 0.) At least that’s my intuition. There might even be a way of formalizing it: for example, say that in a simulated world the population of earth is only an upper bound on the number of “true observers” (the rest being NPCs), whereas in the real world everyone is a “true observer.” Is there something wrong with this intuition?
My argument didn’t even make those assumptions. Nothing in my argument “falsified” reality, nor did I “prove” the existence of something outside my immediate senses. It was merely a probabilistic, anthropic argument. Are you familiar with anthropics? I want to hear from someone who knows anthropics well.
Indeed, your video game scenario is not even really qualitatively different from my own situation: if I were born with 1000 HP, you could still argue that “data from within the ‘simulation’...is not proof of something ‘without’.” And you could update your “scientific” understanding of the distribution of HP to account for the fact that precisely one character has 1000 HP.
The difference between my scenario and the video game one is merely quantitative: Pr(1000 HP | I’m not in a video game) < Pr(I’m a superlative | I’m not in a simulation), though both probabilities are very low.
But, on second thought, why are you confident that the way I’d fill the bags is not “entangled with the actual causal process that filled these bags in a general case”? It seems likely that my sensibilities reflect, at least in some manner, the sensibilities of my creator, if such a creator exists.
Actually, in addition, my argument still works if we only consider simulations in which I’m the only human and I’m distinct (on my aforementioned axis) from the other human-seeming entities. Then the 0.5 probability becomes identically 1, and I sidestep your argument: if I assign any non-zero prior to this theory whatsoever, the observation that I’m distinct makes this theory vastly more likely.
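Spelling that update out (a sketch using my earlier 0.0001 figure; with the 1-in-a-million estimate the factor would be even larger): for any prior $\varepsilon > 0$ on this only-human theory $T$,

$$
\frac{\Pr(T\mid\text{distinct})}{\Pr(\neg T\mid\text{distinct})}
= \frac{\Pr(\text{distinct}\mid T)}{\Pr(\text{distinct}\mid\neg T)}\cdot\frac{\varepsilon}{1-\varepsilon}
= \frac{1}{0.0001}\cdot\frac{\varepsilon}{1-\varepsilon}
= 10{,}000\cdot\frac{\varepsilon}{1-\varepsilon}.
$$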
The only part of your comment I still agree with is that SIA and SSA may not be justified, which means my actual error may have been to set Pr(I’m distinct | I’m not in a sim) = 0.0001 instead of identically 1, since 0.0001 assumes SSA. Does that make sense to you?
But thank you for responding to me; you are clearly an expert in anthropic reasoning, as I can see from your posts.