Yes, there are two forms of future anthropic shadow, the same way as for the Presumptuous Philosopher:
1. Strong form: alignment turns out to be easy on theoretical grounds.
2. Weak form: I am more likely to be in a world where some collapse (e.g., a Taiwan war) will prevent dangerous AI. And I can already see signs of such an impending war.
Do you think we should move to New Zealand (ChatGPT's suggestion), or somewhere similar, in case of global nuclear war?
New Zealand is a good place, but not everyone can move there, or correctly guess the right moment to do so.