None of this is particularly new; it feels to me like repeating obvious claims that have regularly been made [. . .] But I’ve been repeating them aloud a bunch recently.
I think it’s Good and Valuable to keep simplicity-iterating on fundamental points, such as this one, which nevertheless seem to be sticking points for people who are potential converts.
Asking people to Read the Sequences, with the goal of turning them into AI-doesn’t-kill-us-all helpers, is not Winning given the apparent timescales.
I really hope this isn’t a sticking point for people. I also strongly disagree with this being ‘a fundamental point’.
wait which thing are you hoping isn’t the sticking point?
Ryan is saying “AI takeover is obviously really bad and scary regardless of whether the AI is likely to literally kill everybody. I don’t see why someone’s sticking point for worrying about AI alignment would be the question of whether misaligned AIs would literally kill everyone after taking over.”
[endorsed]
I probably should have specified that my “potential converts” audience was “people who heard that Elon Musk was talking about AI risk something something, what’s that?”, and don’t know more than five percent of the information that is common knowledge among active LessWrong participants.