I think this is one of the biggest problems with these examples. It is theoretically impossible that (assuming your current life history has finite Kolmogorov complexity) you could hoard enough evidence to trust someone completely.
To me this seems like a fundamental (and mathematically quantifiable!) problem with these hypothetical situations: if a rational agent (one that uses Occam's razor to model reality) encounters a really complicated god-like being that does all kinds of impossible-looking things, then the agent should rather conclude that his brain is not working properly (or perhaps that he is a Boltzmann brain), since that would still be a simpler explanation than assuming the reality of Omega.
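To make the quantifiable part concrete, here is a minimal sketch (my own illustration, with made-up numbers) of the underlying arithmetic: under a Solomonoff-style prior a hypothesis H gets prior weight roughly 2^-K(H), and n bits of observed evidence can shift the log2 odds between two hypotheses by at most n. So if specifying a genuine Omega takes vastly more bits than specifying a brain malfunction, no finite life history of evidence ever makes Omega the better explanation.

```python
def max_posterior_log_odds(k_omega_bits, k_glitch_bits, evidence_bits):
    """Upper bound on the log2 posterior odds of 'Omega is real' versus
    'my brain is glitching', assuming the evidence is maximally
    favorable to Omega (likelihood ratio of 2**evidence_bits)."""
    # Prior log2 odds from the complexity prior: 2^-K(Omega) / 2^-K(glitch)
    prior_log_odds = k_glitch_bits - k_omega_bits
    # n bits of evidence can move the log2 odds by at most n
    return prior_log_odds + evidence_bits

# Hypothetical magnitudes, purely for illustration:
k_omega = 10**9    # bits to specify a genuine god-like Omega
k_glitch = 10**6   # bits to specify a malfunctioning brain
lifetime = 10**8   # bits of evidence in one finite life history

print(max_posterior_log_odds(k_omega, k_glitch, lifetime))
# Negative: even maximally favorable evidence leaves the glitch
# hypothesis more probable.
```

The numbers are arbitrary; the point is only that whenever K(Omega) minus K(glitch) exceeds the total information content of your life history, the posterior odds can never favor Omega.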