I don’t think the intellectual work of finding a concrete way that’s likely to let humanity survive an AGI going foom has been done yet. If there were such a concrete way, the problem would be a lot less problematic.
Hopefully, places like MIRI and FHI will do that work in the future. So I would expect people who take the problem seriously to support organizations like MIRI and FHI over OpenAI, which pushes for capability increases.
Who specifically do you think should act differently, and in what concrete way, because they are more aware of the Beyond the Reach of God narrative?