Hypothetically suppose the following (throughout, assume “AI” stands for significantly superhuman artificial general intelligence):
1) if we fail to develop AI before 2100, various non-AI-related problems kill us all in 2100.
2) if we ever develop unFriendly AI before Friendly AI, UFAI kills us.
3) if we develop FAI before UFAI and before 2100, FAI saves us.
4) FAI isn’t particularly harder to build than UFAI is.
Given those premises, it’s true that UFAI isn’t a major existential risk, in that even if we do nothing about it, UFAI won’t kill us. But it’s also true that FAI is the best (indeed, the only) way to save us.
Are those premises internally contradictory in some way I’m not seeing?
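One way to check is to treat the premises as propositional claims and brute-force every possible world. This is only a sketch: the atom names are mine, premise 4 is omitted because it concerns difficulty rather than outcomes, and the two "background" constraints encode assumptions implicit in the setup (at most one kind of AI comes first; "FAI first" includes "before 2100").

```python
from itertools import product

def implies(a, b):
    """Material implication for booleans: a -> b."""
    return (not a) or b

# Atoms (names are my own simplification of the thread's premises):
#   fai_first  - FAI is built before any UFAI, and before 2100
#   ufai_first - UFAI is built before any FAI
#   ai_by_2100 - some AGI (friendly or not) exists before 2100
#   doomed     - humanity is wiped out
worlds = []
for fai_first, ufai_first, ai_by_2100, doomed in product([False, True], repeat=4):
    background = (not (fai_first and ufai_first)        # at most one kind comes first
                  and implies(fai_first, ai_by_2100))   # "FAI first" entails AI by 2100
    p1 = implies(not ai_by_2100, doomed)   # premise 1: no AI by 2100 -> doom
    p2 = implies(ufai_first, doomed)       # premise 2: UFAI first -> doom
    p3 = implies(fai_first, not doomed)    # premise 3: FAI first, in time -> saved
    if background and p1 and p2 and p3:
        worlds.append((fai_first, ufai_first, ai_by_2100, doomed))

# The premises are jointly consistent iff at least one world satisfies them all.
print(len(worlds) > 0)                        # -> True
print((True, False, True, False) in worlds)   # -> True: FAI first, in time, not doomed
```

Under this encoding the premises are satisfiable (the world where FAI arrives first and in time makes all of them true), so they are not internally contradictory in the propositional sense; at most they are jointly implausible.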
How do you imagine a hypothetical world where uFAI is not dangerous enough to kill us, but FAI is powerful enough to save us?
I don’t. Just imagine a hypothetical world where lots of other things are much more certain to kill us much sooner, if we don’t get FAI to solve them soon.
No, you’re right. thomblake makes the same point. I just wasn’t thinking carefully enough. Thanks!