Because if you don’t, someone else will.
Not obviously true. An alternative which immediately comes to my mind is a globally enforced mutual agreement to refrain from building superintelligences.
(Yes, that alternative is unrealistic if making superintelligences turns out to be too easy. But I’d want to see that premise argued for, not passed over in silence.)
The more general problem is that we need a solution to multi-polar traps (of which superintelligent AI creation is one instance). The only viable solution I’ve seen proposed is creating a sufficiently powerful Singleton.
The only likely viable ideas for Singletons I’ve seen proposed are superintelligent AIs, and a human group with extensive use of thought-control technologies on itself. The latter probably can’t work unless you apply it to all of society, since it doesn’t have the same inherent advantages AI does, and as such would remain vulnerable to being usurped by a clandestinely constructed AI. Applying the latter to all of society, OTOH, would most likely cause massive value loss.
Therefore I’m in favor of the former; not because I like the odds, but because the alternatives look worse.