As promised, here’s a plausible sketch for how Truth might fall:
Modal realism is true, so you lose all objective contingent truths (e.g. "the sky is blue"). You are left with only:
- necessary truths about the space of all possible worlds (e.g. there is some world where the sky is blue; plus recombination truths: if the sky is possibly some colour and the sea is possibly some colour, then there is a world where the sky is the first colour and the sea is the second).
- subjective truths about which world you occupy (e.g. "the sky in my world is blue"). This is a bit more impoverished, but probably fine.
However, you might lose the subjective contingent truths too, because "I/me/my" might pick out multiple entities across possible worlds: some in base reality, some in simulated realities, etc. That kinda sucks, because then there isn't even a subjective truth about whether you're in a simulation. Note this is worse than the sceptical worry that you don't *know* whether you're in a simulation; here the worry is that there's no truth to know!
Possibly you can save this with something UDASSA-ish, where epistemology is a matter of what you care about. But then we're at square zero: your "caring" might not be coherent even under idealisation.
Okay, so what are we left with? Just the facts about different Solomonoff priors? That's kinda impoverished. And we might even lose those facts, due to logically impossible worlds, non-standard models of arithmetic, or set-theoretic pluralism.
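To make the "different Solomonoff priors" point concrete, here is a toy sketch. Real Solomonoff induction is incomputable; this just weights each finite binary program by 2^-length and buckets the weight by what a chosen "machine" says the program outputs. The two machines below (`machine_A`, `machine_B`) are hypothetical stand-ins for different universal machines, invented for illustration; the point is only that the resulting priors genuinely disagree.

```python
from itertools import product

def toy_solomonoff_prior(semantics, max_len=10):
    """Toy Solomonoff-style prior: weight each binary program p by
    2^-len(p), crediting the weight to whatever output semantics(p)
    assigns. (Unnormalised; a cartoon of the real, incomputable thing.)"""
    prior = {}
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            out = semantics(p)
            prior[out] = prior.get(out, 0.0) + 2.0 ** -len(p)
    return prior

# Two hypothetical "universal machines" that read programs differently:
def machine_A(p):
    return p.count("1") % 2            # output = parity of the 1-bits

def machine_B(p):
    return 1 if p.startswith("11") else 0   # output = "starts with 11?"

prior_A = toy_solomonoff_prior(machine_A)
prior_B = toy_solomonoff_prior(machine_B)
# prior_A and prior_B assign different weights to the same outcome 1:
# a cartoon of the machine-dependence (unique only up to a multiplicative
# constant) behind "facts about different Solomonoff priors".
```

So even the residual facts are relative to a choice of machine, which is part of why they feel like a thin foundation.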