But, “guys, this is very dangerous, we are proceeding very carefully before summoning something smarter than us, while trying our best to let everyone reap the benefits of it” seems like a way easier narrative to get everyone bought into than “guys, this is dangerous enough to warrant massive GPU monitoring but… we’re still trying to push ahead as fast as we can?”.
Wouldn’t the narrative for Plan A be more like “we should be cautious and slow down if we aren’t confident about safety, and we’ll need to build the ability to slow down a lot”? While the narrative for “shut it all down” would have to involve something like “proceeding with any further development is too risky given the current situation”.
I’m not 100% sure what Nate/Eliezer believe. I know they do think eventually we should build superintelligence, and that it’d be an existential catastrophe if we didn’t.
I think they think (and I agree) that we should at least be prepared for things that are more like 20-50 year pauses, if it turns out to take that long, but (at least speaking for myself) this isn’t because it’s intrinsically desirable to pause for 50 years. It’s because you should remain shut down until you actually, confidently know what you’re doing, with no pressure to convince yourself/each other that you’re ready when you are not.
It might be that AI-accelerated alignment research means you don’t need a 20-50 year pause, but that should be a decision the governing body makes based on how things are playing out, not something baked into the initial assumption. That way we don’t need to take risks like “run tons of very smart AIs in parallel very fast” when we’re only somewhat confident about their longterm alignment, which opens us up to more gradual disempowerment / slowly-outmaneuvered risk, or eventual death by evolution.
I haven’t read the entirety of the IABIED website’s proposed treaty draft yet, but it includes this line, which has some flavor of “re-evaluate how things are going”:
Three years after the entry into force of this Treaty, a Conference of the Parties shall be held in Geneva, Switzerland, to review the operation of this Treaty with a view to assuring that the purposes of the Preamble and the provisions of the Treaty are being realized. At intervals of three years thereafter, Parties to the Treaty will convene further conferences with the same objective of reviewing the operation of the Treaty.
Sure, I agree that Nate/Eliezer think we should eventually build superintelligence and don’t want to cause a pause that lasts forever. In the comment you’re responding to, I’m just talking about the difficulty of getting people to buy the narrative.
More generally, what Nate/Eliezer think is best doesn’t resolve concerns with the pause going poorly because something else happens in practice. This includes the pause going on too long, or leading to a general anti-AI/anti-digital-minds/anti-progress view which is costly for the longer-run future. (This applies to the proposed Plan A as well, but I think poor implementation is less scary in various ways and the particular risk of ~anti-progress-forever is less strong.)