Few people have done this sort of thinking here, because this community mostly worries about risks from general, agentic AI rather than narrow AI. We worry about systems that will decide to eliminate us, and that have the superior intelligence to do so. Survivor scenarios are much more likely to arise from narrow AI accidentally causing a major disaster, and that's mostly not what this community worries about.