To echo @StanislavKrym, it seems like there were a few assumptions that pretty easily explain the lack of diversity around this topic.
1. That compute power would not be a bottleneck. (Right now, the main means of trying to stop AGI from being built is restricting compute; if compute were not a significant bottleneck, then global agreements about this would have to be immensely more intrusive and dubious.)
2. That FOOM would be fast, and we would not have a multi-year period where people can actually talk to AIs but there is no superintelligence; this kind of period is one of the reasons a “pause” is plausible. (In “There’s No Fire Alarm for Artificial General Intelligence,” for instance, Yudkowsky says that people will only think AGI is imminent when “the AI seeming pretty smart in interaction and conversation; aka the AI actually being an AGI already.” Really, that whole article is about how people are going to just not realize things will happen until they happen; it is premised on a much faster FOOM.)
3. That there would be “infohazards” about AI, potentially related to AI creation, which, if made generally known, would be enormously net negative. (MIRI actually stopped publishing because they thought they might have such infohazards. If you’re concerned about drawing attention to these things, then an “International Pause” is again plausibly the worst thing you could do, because you’re drawing a bunch of attention to a small space.)
The third seems a bit fuzzier and weaker as a reason not to consider a pause, although I wouldn’t be surprised if it was one such reason. The first two seem to me pretty compelling reasons this option would not be considered. In general, I expect all of these were operating not as explicit considerations but as hidden steering vectors, keeping this option from ever being on the table.
Paul Christiano didn’t have these beliefs/assumptions (and was prominent on LW), and more and more people in AI safety came to hold Paul-like views as deep learning became more popular and successful through the 2010s (including safety people at DeepMind, founded in 2010, and OpenAI, founded in 2015), yet still no significant Pause position or advocacy appeared until 2022.
There is still no fire alarm for ASI. How things will work out is still uncertain. In the fire alarm analogy, the AI talking to you is more like smoke than an alarm, and it’s still hard to coordinate a pause around that. But yes, post-ChatGPT your chances are still higher than post-GPT-2 or post-AlphaGo. I don’t know where I read it, but I think some people involved in MIRI said explicitly that they were not trying to make the wider public pay too much attention to AI, which makes complete sense given how people end up doing stupid things in response (the founding of OpenAI, etc.). So not making too much of a fuss at that time just seems correct.