(like, even specifically resolving the lack-of-nuance this post complains about, requires distinguishing between “never build ASI” and “don’t build ASI until it can be done safely”, which isn’t covered in the Two Sides)
Camp B) “Don’t race to superintelligence”: People in this group typically argue that “racing to superintelligence is bad because of Y”. Here Y is typically some combination of “uncontrollable”, “1984”, “disempowerment” and “extinction”.
The main split is about whether racing in the current regime is desirable, so both “never build ASI” and “don’t build ASI until it can be done safely” fall within the scope of Camp B. Call these two subcamps B1 and B2. I think B1 and B2 give the same prescriptions within the actionable timeframe.
Some people likely think:
don’t build ASI until it can be done safely > build ASI whenever but try to make it safe > never build ASI
Those people might give different prescriptions from the “never build ASI” people, like not endorsing actions that would tank the probability of ASI ever getting built. (Although in practice I think they probably mostly make the same prescriptions at the moment.)
I agree that some people have this preference ordering, but I don’t know of any difference in the specific actionable recommendations that would be given by the “don’t until safely” and “don’t ever” camps.
In practice, bans can be lifted, so “never” is never going to become an unassailable law of the universe. And right now, it seems misguided to quibble over “Pause for 5, 10, or 20 years” versus “Stop for good”, given the urgency of the extinction threat we are currently facing. If we’re going to survive the next decade with any degree of certainty, we need an alliance between B1 and B2, and I’m happy for one to exist.
On this point specifically, those two groups are currently allied, though they don’t always recognize it. If sufficiently safe alignment turns out to be impossible, or if humanity decides never to build ASI, there would no longer be any difference between the two groups.
This is well-encapsulated by the differences between Stop AI and PauseAI. At least from PauseAI’s perspective, both orgs are currently on exactly the same team.
Pause AI is clearly a central member of Camp B? And Holly signed the superintelligence petition.
Yes, my comment was meant to address the “never build ASI” and “don’t build ASI until it can be done safely” distinction, which Raemon was pointing out does not map onto Camp A and Camp B. All of ControlAI, PauseAI, and Stop AI are firmly in Camp B, but have different opinions about what to do once a moratorium is achieved.
One thing I meant to point toward was that unless we first coordinate to get that moratorium, the rest is a moot point.
I do not yet know of anyone in the “never build ASI” camp and would be interested in reading or listening to an extended elaboration of such a position.