Delivering an impassioned argument that AI will kill everyone culminating in a plea for a global treaty is like delivering an impassioned argument that a full-on war between drug cartels is about to start on your street culminating with a plea for a stern resolution from the homeowner’s association condemning violence. A treaty cannot do the thing they ask.
Could you suggest an alternate solution that actually ensures no one builds the ASI? If there's no such solution, then someone will build it, and we'll only be able to pray that alignment techniques have worked. [1]
Creating an aligned ASI will also lead to problems like potential power grabs and the Intelligence Curse.
No, I can’t. And I suspect that if the authors conducted a more realistic political analysis, the book might just be called “Everyone’s Going to Die.”
But if you're trying to come up with an idea that's at least capable of meeting the magnitude of the asserted threat, you'd consider things like:
Find a way to create a world government (a nigh-impossible ask, to be sure) and then use it to ban AI.
Force anyone with relevant knowledge of how to build an AI into some kind of tech-free monastery, and hunt down anyone who refuses with ten times the ferocity used in going after Al Qaeda after 9/11.
And then you just have to bite the bullet and accept that if these entail a risk of a nuclear war with China, then you fight a nuclear war with China. I don't think either of those would really work out, but at least they could work out.
If there is some clever idea out there for how to achieve an AI shutdown, I suspect it involves some way of ensuring that developing AI is economically unprofitable. I personally have no idea how to do that, but unless you cut off the financial incentive, someone’s going to do it.
The book spends a long time talking about what the minimum viable policy might look like, and comes to the conclusion that it’s more like:
The US, China, and Russia (is Russia even necessary? can we use export controls? Russia's GDP is smaller than Italy's. India is the real third player here IMO) agree that anyone who builds a datacenter they can't monitor gets hit with a bunker-buster.
This is unlikely. But it's several OOMs less effort than building a world government over everything.
Is that a quote from IABIED?
It made me realize a possibility: strategic cooperation on AI between Russia and India. They have a history of goodwill, and right now India is estranged from America. (Though Anthropic's Amodei recently met Modi.) The only problem is that neither Russia nor India is a serious chipmaker, so like everyone else they're dependent on the American and Chinese supply chains...
It's not a quote, no, but it's the overall picture they gave (I've removed the quotation marks now). They made it pretty clear that a few large nations cooperating just on AGI non-creation would be enough.
I'd describe it more as "this would make a serious dent in the problem", enough to be worth the costs. "Enough" is a strong word.
An AI treaty would globally shift the Overton window on AI safety, making more extreme measures more palatable in the future. The options you listed are currently way outside the Overton window; they're bad solutions, and they don't even get us closer to a good solution, because they simply couldn't happen.