Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’?
This seems like a pretty leading statement, since it (a) presupposes that an intelligence explosion will happen, and (b) puts anyone who disagrees about the likely x-risk factor up against Turing and Hawking.
Have Turing or Hawking even talked about AI as an existential risk? I thought that sort of concern came after Turing’s time, and I vaguely recall Hawking saying that he thought AI was possible and carried risks, but not going so far as to specifically claim it may be a serious threat to humanity’s survival.
It doesn’t quite do (a), although there is an ambiguity there that could be removed if desired. (It obviously does (b).)