I think there’s still a window of opportunity today to stop AI without creating a world government. Building an AI today requires a huge “supercritical pile of GPUs”, so to speak, which is costly and noticeable, much like uranium. But software advances can change that. So it’d be best to take the hardware off the table soon, with the same kind of international effort that went into stopping nuclear proliferation. But realistically, humanity won’t pass such a measure without getting a serious scare first. And there’s a high chance the first serious scare just kills us.
The only problem is that this would further accelerate the pressure to produce software advances. Certain software improvements are not being pursued as fast as possible at the moment, because the industry leaders are over-relying on the “bitter lesson” and on their huge fleets of GPUs in a somewhat brute-force fashion.
(Michael Pollan, in The Botany of Desire, explains how drug prohibition led to much faster progress toward today’s very potent cannabis by creating pressure to pack a stronger punch into each unit of weight and volume.
People point to nuclear non-proliferation as a semi-successful example of prohibition, but the situation might be closer to our drug war. It’s easy to target the AI leaders with their large GPU fleets and large teams, just as it’s feasible to regulate big pharma. It might be far more difficult to identify all the small groups around the world pursuing non-saturating recursive self-improvement of scaffolding and the like on top of already-released open-weight LLMs. A failed prohibition is likely to lower the odds of a reasonable outcome by making the identity of the winner and the nature of the winning approach very unpredictable.)
I would like to draw a strong distinction between a “world government” and an organization capable of effecting international de-escalation of the AGI race. I don’t think you were exactly implying that the former is necessary for the latter, but since the former seems implausible and the latter seems necessary for humanity to survive, it seems good to distinguish them clearly.