As such, the problems we’re likely to have with AI are less ‘Terminator’ and more ‘Sorcerer’s Apprentice’.
This is true and important and a lot of the other experts don’t get it. Unfortunately, Uther seems to think that SIAI/LW/Xixidu doesn’t get it either, and
These types of problems are less worrying as, in general, the AI isn’t trying to actively hurt humans.
shows that he hasn’t thought about all the ways that “Sorcerer’s Apprentice” AIs could go horribly wrong.
Yeah, I agree that Xixidu’s mails could make it clearer that he’s aware (and that LessWrong is aware) that “Sorcerer’s Apprentice” is a better analogy than “Terminator”, so as to get responses that aren’t just “Terminator is fiction, silly!”.