Yeah, (for humans) it’s the difference between knowing you’re in a simulation with high confidence based on looking at the world and doing unbiased first-order Bayesian reasoning, and knowing you’re in a simulation with high confidence because you (or your ancestors) keep getting rewarded for thinking you’re in a simulation and making inferences accordingly.