Yeah, (for humans) it's the difference between knowing you're in a simulation with high confidence based on looking at the world and doing unbiased first-order Bayesian reasoning, and knowing you're in a simulation with high confidence because you (or your ancestors) keep getting rewarded for thinking you're in a simulation and making inferences accordingly.
Interesting: until now I had assumed that the "smart reason" was identical to 2, but clearly they are different.