I sadly don’t have time to really introspect what is going on in me here, but something about this comment feels pretty off to me. I think in some sense it provides an important counterpoint to the OP, but I also feel like it stretches the truth quite a bit:
Toby Ord primarily works on influencing public opinion and governments, and very much seems to view the world through a “raising the sanity waterline” lens. Indeed, I just talked to him yesterday morning, where I tried to convince him that misuse risk from AI, and the risk of the “wrong actor” getting the AI, are much smaller than he thinks they are, which feels like a very related topic.
Eliezer has done most of his writing on the meta-level, on the art of rationality, on the art of being a good and moral person, and on how to think about your own identity.
Sam Bankman-Fried is also heavily involved in political activism, and (my guess) is quite concerned about the information landscape. I expect he would hate the terms used in this post, but I expect there to be a bunch of similarities between his model of the world and the one outlined in this post, in terms of trying to raise the sanity waterline and improve the world’s decision-making in a much broader sense (there is a reason why he was one of the biggest contributors to the Clinton and Biden campaigns).
I think it is true that the other three are mostly focusing on object-level questions.
I… also dislike something about the meta-level move of arguing from high-status individuals. I expect it to make the discussion worse, and also to make it harder for people to respond with counterarguments, because counterarguments could be read as attacking the high-status people, which is scary.
I dislike the language used in the OP, and sure feel like it actively steers attention in unproductive ways that make me not want to engage with it. But I also have a strong sense that it’s going to be very hard to actually make progress on building a healthy field of AI Alignment, because the world will repeatedly try to derail the field into being about defeating the other monkeys, or into being another story about why you should work at the big AI companies, or why you should give person X or movement Y all of your money. That feels to me related to what the OP is talking about.
The Sam Bankman-Fried paragraph reads differently now that his massive fraud at FTX is public; might be worth a comment/revision?
I can’t help but see Sam disagreeing with a message as a point in the message’s favor (I know it’s a fallacy, but the feeling’s still there).
Hmm, I feel like the revision would have to be in Scott’s comment. I was just responding to the names that Scott mentioned, and I think everything I am saying here is still accurate.