Okay, but the reason you think AI safety/x-risk is important is because twenty years ago, people like Eliezer Yudkowsky and Nick Bostrom were trying to do systematically correct reasoning about the future, noticed that the alignment problem looked really important, and followed that line of reasoning where it took them—even though it probably looked “tainted” to the serious academics of the time. (The robot apocalypse is nigh? Pftt, sounds like science fiction.)
Yep, this is true.
(Well, I don’t think EY + NB just wanted true beliefs, I think they were also specifically asking themselves like “Which questions are important to ask?” and focused on things like nanotech and EMs and AGI, but I think your key point is that they also had to push against a lot of political tides in academia and other circles, and that kowtowing to such pressures universally will kill you, not literally but kinda.)
But, I will add that I think there’s a general variable we can track a bit, which is whether a topic is pulling along a major/common dimension in the conversational/political tug-o-war, or whether the idea is trying to pull the rope sideways. I tend to be more sympathetic to conversations that are pulling it sideways, and want to be more charitable and give space to such ideas, than to ideas that are being debated on every other space on the internet (example).
I feel some desire to say, specifically to you, Zack, that obviously I don’t think this means you should let everyone agree with one side of such a topic; you shouldn’t pick a side, but instead discourage arguments for either side equally. Jim Babcock was the person who taught me the rule that if you get punished for saying X, then even if you believe not-X, it is generally unethical to argue for not-X, since the other side isn’t allowed to counter.