It's actually not clear what EY means by “anthropic immortality”.
Same, I’m guessing that by “It actually doesn’t depend on quantum mechanics either, a large classical universe gives you the same result”, EY means that QI is just one way Anthropic Immortality could be true, but “Anthropic immortality is a whole different dubious kettle of worms” seems to contradict this reading.
(Maybe it’s ‘dubious’ because it does not have the intrinsic ‘continuity’ of QI? e.g. you could ‘anthropically survive’ in a completely different part of the universe as a copy of you; but I doubt that would seem dubious to EY?)
Future anthropic shadow. I am more likely to be in the world in which alignment is easy.
I think anthropic shadow lets you say, conditional on survival, “(for example) a nuclear war or other collapse will have happened”[1], but not that alignment was easy, because alignment being easy would be a logical fact, not a historical contingency; if it’s true, it wouldn’t be true for anthropic reasons. (Although stumbling upon paradigms in which it is easy would be a historical contingency.)
[1] “while civilization was recovering, some mathematicians kept working on alignment theory that did not need computers, so that by the time humans could create AIs again, they had alignment solutions to present”
Yes, there are two forms of future anthropic shadow, the same way as for the Presumptuous Philosopher:
1. Strong form: alignment is easy on theoretical grounds.
2. Weak form: I am more likely to be in a world where some collapse (e.g. a Taiwan war) will prevent dangerous AI. And I can see signs of such an impending war now.
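For concreteness, here is a minimal, purely illustrative Python sketch of that conditioning step; all priors and survival probabilities are invented numbers, not anyone's actual estimates. Conditioning on survival shifts probability mass toward survival-compatible worlds: the weak-form shift (toward “a collapse happened”) is the less contested one, while whether the strong-form shift (toward “alignment is easy”) is legitimate is exactly what the comments above dispute, since that is a logical fact rather than a historical contingency.

```python
# Toy numerical sketch of "future anthropic shadow" conditioning.
# All priors and survival probabilities below are invented for illustration.

# World types: (alignment difficulty, whether a collapse delays dangerous AI)
priors = {
    ("easy", "collapse"):    0.05,
    ("easy", "no_collapse"): 0.20,
    ("hard", "collapse"):    0.15,
    ("hard", "no_collapse"): 0.60,
}

# Assumed chance that observers like us are still around late in each world type;
# "hard + no collapse" is the world where dangerous AI is most likely to wipe us out.
p_survive = {
    ("easy", "collapse"):    0.9,
    ("easy", "no_collapse"): 0.9,
    ("hard", "collapse"):    0.8,
    ("hard", "no_collapse"): 0.1,
}

# Naive anthropic conditioning: P(world | survival) is proportional to
# P(survival | world) * P(world).
joint = {w: priors[w] * p_survive[w] for w in priors}
z = sum(joint.values())
posterior = {w: joint[w] / z for w in joint}

def marginal(dist, index, value):
    """Sum a distribution over the world types matching `value` at position `index`."""
    return sum(p for w, p in dist.items() if w[index] == value)

# Weak-form update: probability of a collapse rises after conditioning on survival.
print("P(collapse):       %.2f -> %.2f" % (marginal(priors, 1, "collapse"),
                                           marginal(posterior, 1, "collapse")))
# Strong-form update: probability that alignment is easy also rises under this
# naive treatment; whether that step is legitimate is the point of disagreement.
print("P(alignment easy): %.2f -> %.2f" % (marginal(priors, 0, "easy"),
                                           marginal(posterior, 0, "easy")))
```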
Do you think we should be moving to New Zealand (ChatGPT’s suggestion) or something in case of global nuclear war?
New Zealand is a good place, but not everyone can move there, or correctly guess the right moment to do it.
I think we can conceivably gather data on the combination of “anthropic shadow is real & alignment is hard”.
Predictions would be:
1. we will survive this;
2. conditional on us finding alien civilizations that reached the same technological level, most of them will have been wiped out by AI.
Prediction 2 is my guess as to why there is a Great Filter, more so than Grabby Aliens.
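A minimal sketch of how such data could be used, assuming (hypothetically) that we could count how many discovered same-level civilizations were wiped out by their AI; the hypotheses, the per-civilization rates, and the “8 of 10” dataset are all invented for illustration.

```python
# Hypothetical sketch of the "gather data" idea above: a Bayes factor comparing
# H1 = "anthropic shadow is real & alignment is hard" against H0 = "alignment
# is not especially hard". Every number here is invented for illustration.
from math import comb

def binomial_likelihood(k: int, n: int, p: float) -> float:
    """P(k of n same-level civilizations were wiped out by AI | per-civ probability p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p_wiped_under_h1 = 0.9   # assumed per-civilization wipe-out rate if alignment is hard
p_wiped_under_h0 = 0.2   # assumed rate otherwise

# Hypothetical future dataset: 8 of 10 discovered civilizations were wiped out by AI.
k, n = 8, 10
bayes_factor = (binomial_likelihood(k, n, p_wiped_under_h1)
                / binomial_likelihood(k, n, p_wiped_under_h0))

prior_odds = 1.0                      # even prior odds, purely for illustration
posterior_odds = prior_odds * bayes_factor
print(f"Bayes factor (H1 vs H0): {bayes_factor:.0f}")
print(f"Posterior odds:          {posterior_odds:.0f}")
```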