He probably means that, in certain kinds of Big World, TDT/UDT leads to unpopular conclusions. E.g. that we should believe in all deities who punish disbelief, if they exist in some possible world.
This seems close to the reason I rejected the mathematical macrocosm hypothesis, even before Someone Who's Probably Not Will Newsome explained part of his position. If Tegmark IV either does not constrain anticipation, or calls you the equivalent of a Boltzmann Brain, then it fails as an explanation. And upon close inspection, I don't think I reject the Boltzmann Brain hypothesis just by applying decision theory. It seems logically absurd to say, "I am likely a Boltzmann Brain, but acting like a real boy has more expected value." The first clause means I shouldn't trust my reasoning, and should likely just think happy thoughts. I think the best theory of reality will say that a random (real) mind would likely benefit from rationality, at least to the extent that we appear to benefit.