The general problem with Bostrom’s argument is that it applies an incorrect probabilistic model. It implicitly assumes independence where there is a causal connection, and therefore arrives at a wrong conclusion, similarly to the conventional reasoning in the Doomsday Argument or the Sleeping Beauty problem.
For future humans, say in the year 3000, to create simulations of the year 2025, the actual year 2025 first has to happen in base reality, and then all the years after it up to 3000. We know this very well: not a single simulation can happen unless the actual reality happens first.
And yet Bostrom models our knowledge about this setting as if we were participating in a probability experiment with a random sample drawn from many “simulation” outcomes and one “reality” outcome. The inadequacy of such modelling should be obvious. Consider:
There is a bag with a thousand balls: one red and 999 blue. First the red ball is picked from the bag, then all the blue balls are picked one by one.
and compare it to
There is a bag with a thousand balls: one red and 999 blue. For a thousand iterations, a random ball is picked from the bag.
Clearly, the second procedure is very different from the first. The mathematical model that describes the second does not describe the first at all, for exactly the same reasons why Bostrom’s model does not describe our knowledge state.
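To make the difference concrete, here is a minimal simulation sketch in Python (the function names and the trial count are my own; only the one-red, 999-blue setup comes from the examples above). It estimates the probability that the very first ball drawn is red under each procedure:

```python
import random

# A minimal sketch: only the 1-red / 999-blue setup comes from the examples above;
# the names and trial count are illustrative assumptions.

N_BALLS = 1000
N_TRIALS = 100_000

def first_procedure():
    """The red ball is deliberately drawn first, then the blue ones follow."""
    return "red"  # the first draw is red by construction

def second_procedure():
    """Every draw is a uniformly random ball; we only look at the first draw."""
    balls = ["red"] + ["blue"] * (N_BALLS - 1)
    return random.choice(balls)

red_first_1 = sum(first_procedure() == "red" for _ in range(N_TRIALS))
red_first_2 = sum(second_procedure() == "red" for _ in range(N_TRIALS))

print(f"P(first draw is red), procedure 1: {red_first_1 / N_TRIALS:.4f}")  # ~1.000
print(f"P(first draw is red), procedure 2: {red_first_2 / N_TRIALS:.4f}")  # ~0.001
```

Under the first procedure the first draw is red with probability 1; under the second it is red with probability only 1/1000. The red ball necessarily coming first is the analogue of base reality necessarily preceding any simulations, which is why treating our situation as a random draw among a thousand interchangeable outcomes misdescribes it.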