Nice post! I like the ladder metaphor.
For events, one saving grace is that many people actively dislike events that grow too large and crowded, and start to long for the smaller, cozier version at that point. So instead of the bigger event competing with the smaller one and drawing people away from it, it might actually work the other way around, with the smaller event being the one that “steals” people from the bigger one.
Hmm, I feel like I always had something like this as one of my default scenarios. Though it would of course have been missing some key details such as the bit about model release culture, since that requires the concept of widely applicable pre-trained models that are released the way they are today.
E.g. Sotala & Yampolskiy 2015 and Sotala 2018 both discussed there being financial incentives to deploy increasingly sophisticated narrow-AI systems until they finally crossed the point of becoming AGI.
S&Y 2015:
And with regard to the difficulty of regulating them, S&Y 2015 mentioned that:
and in the context of discussing AI boxing and oracles, argued that both AI boxing and Oracle AI are likely to be of limited (though possibly still some) value, since there’s an incentive to just keep deploying all AI in the real world as soon as it’s developed:
I also have a distinct memory of writing comments saying something like “why does anyone bother with ‘the AI could escape the box’ type arguments, when the fact that financial incentives would make the release of those AIs inevitable anyway makes the whole argument irrelevant”, but I don’t remember whether it was on LW, FB, or Twitter, and none of those platforms has a good way of searching my old comments. But at least Sotala 2018 had an explicit graph showing the whole AI boxing thing as just one way by which the AI could escape, which would be irrelevant if the AI was released otherwise: