I think there is some way that the conversation needs to advance, and I think this is roughly carving at some real joints and it’s important that people are tracking the distinction.
But
a) I’m generally worried about reifying the groups more into existence (as opposed to trying to steer towards a world where people can have more nuanced views). This is tricky; there are tradeoffs, and I’m not sure how to handle this. But...
b) this post’s title and framing in particular are super leaning into the polarization, and I wish they did something different.
(Like, even specifically resolving the lack of nuance this post complains about requires distinguishing between “never build ASI” and “don’t build ASI until it can be done safely”, which isn’t covered in the Two Sides.)
Camp B) “Don’t race to superintelligence”: People in this group typically argue that “racing to superintelligence is bad because of Y”. Here Y is typically some combination of “uncontrollable”, “1984”, “disempowerment” and “extinction”.
The main split is about whether racing in the current regime is desirable, so both “never build ASI” and “don’t build ASI until it can be done safely” fall within the scope of camp B. Call these two subcamps B1 and B2. I think B1 and B2 give the same prescriptions within the actionable timeframe.
Some people likely think:

don’t build ASI until it can be done safely > build ASI whenever but try to make it safe > never build ASI
Those people might give different prescriptions than the “never build ASI” people would, like not endorsing actions that would tank the probability of ASI ever getting built. (Although in practice I think they probably mostly make the same prescriptions at the moment.)
I agree that some people have this preference ordering, but I don’t know of any difference in specific actionable recommendations that would be given by “don’t until safely” and “don’t ever” camps.
In practice, bans can be lifted, so “never” is never going to become an unassailable law of the universe. And right now, it seems misguided to quibble over “Pause for 5, 10, or 20 years” versus “Stop for good”, given the urgency of the extinction threat we are currently facing. If we’re going to survive the next decade with any degree of certainty, we need an alliance between B1 and B2, and I’m happy for one to exist.
On this point specifically, those two groups are currently allied, though they don’t always recognize it. If sufficiently-safe alignment is found to be impossible or humanity decides to never build ASI, there would stop being any difference between the two groups.
This is well-encapsulated by the differences between Stop AI and PauseAI. At least from PauseAI’s perspective, both orgs are currently on exactly the same team.

Pause AI is clearly a central member of Camp B? And Holly signed the superintelligence petition.
Yes, my comment was meant to address the “never build ASI” and “don’t build ASI until it can be done safely” distinction, which Raemon was pointing out does not map onto Camp A and Camp B. All of ControlAI, PauseAI, and Stop AI are firmly in Camp B, but have different opinions about what to do once a moratorium is achieved.
One thing I meant to point toward was that unless we first coordinate to get that moratorium, the rest is a moot point.
I do not yet know of anyone in the “never build ASI” camp and would be interested in reading or listening to an extended elaboration of such a position.
I don’t like polarization as such, but I also don’t like all of my loved ones being killed. I see this post and the open statement as dissolving a conflationary alliance that groups people who want to (at least temporarily) prevent the creation of superintelligence with people who don’t want to do that. Those two groups of people are trying to do very different things that I expect will have very different outcomes.
I don’t think the people in Camp A are immoral people just for holding that position[1], but I do think it is necessary to communicate: “If we do thing A, we will die. You must stop trying to do thing A, because that will kill everyone. Thing B will not kill everyone. These are not the same thing.”
In general, to actually get the things that you want in the world, sometimes you have to fight very hard for them, even against other people. Sometimes you have to optimize for convincing people. Sometimes you have to shame people. The norms of discourse that are comfortable for me, that elevate truth-seeking, and that make LessWrong a wonderful place are not always the patterns most likely to keep us and our families alive in the near future.
[1] Though I have encountered some people in the AI Safety community who are happy to unnecessarily subject others to extreme risks without their consent after a naive utilitarian calculus on their behalf, which I do consider grossly immoral.