A couple more thoughts on this post, which I’ve spent a lot of today thinking about and discussing with folks:
This post was good for generating a lot of discussion and engagement on the topic, but it’d be great to have a more careful, thorough, and systematic analysis of the arguments and implications presented. This post seems to be arguing for short timelines and at least a medium-fast takeoff (which I tend to agree with), but then it argues for mass advocacy as a result.
This is the opposite of the kind of intervention that makes sense to me, and that Holden argues for in this kind of takeoff scenario: ‘Faster and less multipolar takeoff dynamics tend to imply that we should focus on very “direct” interventions aimed at helping transformative AI go well: working on the alignment problem in advance, caring a lot about the cultures and practices of AI labs and governments that might lead the way on transformative AI, etc.’ This is from his Important, actionable questions for the most important century doc.
A more careful and complete analysis should be framed as answers to the ‘Questions about AI “takeoff dynamics”’ from that doc, by someone who can commit the time and thought to it.
While we should look for strong evidence and strong arguments about timelines and takeoff, we shouldn’t be surprised if we can’t arrive at a consensus about them. Given that this post is about pulling the “fire alarm,” I’m kind of surprised no one here has yet linked to MIRI’s very aptly titled There’s No Fire Alarm for Artificial General Intelligence:
“When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door. What I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable. [...] There is never going to be a time before the end when you can look around nervously, and see that it is now clearly common knowledge that you can talk about AGI being imminent, and take action and exit the building in an orderly fashion, without fear of looking stupid or frightened.”
Reflecting on this and other comments, I decided to edit the original post to retract the call for a “fire alarm”.