Did Turing or Hawking ever actually talk about AI as an existential risk? I thought that sort of discussion came after Turing’s time, and I vaguely recall Hawking saying something to the effect that he thought AI was possible and carried risks, but not going so far as to specifically claim it might be a serious threat to humanity’s survival.