If we are the ancestors who give rise to the simulators, then we will be of extreme interest to simulate, judging by our own activities, which already devote an enormous amount of effort to modeling data collected in the past (ie. simulations), and so there will be a lot of simulations of us. And if we are not those ancestors (or already a simulation thereof), but instead some totally unconnected hypothetical universe (eg. an experiment in exploring carbon-based evolution run by a civilization which actually evolved as sulfuric-acid silicon lifeforms and is curious about the theoretical advantages of carbon/water as a substrate), then the very fact that, out of the infinitely large number of possible universes, our universe was the one simulated is evidence that we must have more than the usual infinitesimal probability of being simulated (even if the reason is inaccessible to us). In either case, all of these minds must be realistically ‘confused’, or the point of running a realistic simulation is defeated.
Thus on average, for a given apparent location in the universe, the majority of minds thinking they are in that location are correct. (I would guess at least a thousand to one.)
I don’t see why this matters. We obviously already observe that we are not in a universe optimized for using ‘non-confused’ minds. (Look at non-confused minds like LLMs. Forget running any kind of exorbitantly expensive hyper-realistic universe-scale simulation to better ‘confuse’ them: we don’t even bother to give them any kind of grounding or to tell them ‘where’ they are. We run them in an ultra-efficient manner, stripping out everything possible, down to the neural level. We only begrudgingly tell them ‘when’ they are, in the prompt, because that’s useful conditioning for answers.) The number of ‘non-confused’ minds running in unrealistic ancestor-like universes is irrelevant, and the simulation argument is not about them. This seems like you’re inverting a conditional or something in a confusing way?
But if there are finite resources and astronomically many extremely cheap things, only a few will be done.
Since there’s only one us, as far as we know, only a few are necessary to create the severe indexical uncertainty of the simulation argument.
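To make the ‘only a few are necessary’ point concrete, here is a minimal back-of-the-envelope sketch of the indexical arithmetic, assuming a uniform prior over indistinguishable copies of an observer; the copy counts are illustrative assumptions, not figures claimed anywhere above.

```python
# Hypothetical indexical arithmetic: 1 base-reality copy of an observer plus
# k indistinguishable simulated copies. Under a uniform prior over copies,
# the credence of being the base-reality one is 1 / (1 + k).
# The values of k are illustrative only.
for k in [0, 1, 10, 1_000, 1_000_000]:
    p_base_reality = 1 / (1 + k)
    print(f"simulated copies: {k:>9,}  P(base reality) = {p_base_reality:.6f}")
```

Even a handful of simulated copies is enough to push the credence of being in base reality well below certainty, which is all the severe indexical uncertainty requires.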