Free markets are efficient methods of coordination
Markets Goodhart on profits, which can lead to negative externalities (and corporate compensation structures can lead to Goodharting on quarterly profits; see Boeing for where this can fail)
Governments must force externalities to be internalized, and governments must coordinate with each other or you simply get an arms race (ideally via taxes and fees that slightly exceed the damages caused by the externalities, rather than bans or years wasted waiting for approval)
You propose allowing AI labs to collude, as an exception to antitrust law, but this is unlikely to work because of the defector problem: your proposal creates a large incentive to defect
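The defector problem here is just a one-shot prisoner's dilemma. A minimal sketch, with purely illustrative payoff numbers (assumptions, not data) for two hypothetical labs that each choose to honor a collusive pause or race ahead:

```python
# Illustrative payoffs: (lab A's choice, lab B's choice) -> (A's payoff, B's payoff).
# Racing while the other pauses captures the market; mutual racing is costly.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # both pause: shared, safer profits
    ("cooperate", "defect"):    (0, 5),  # B races ahead and takes the market
    ("defect",    "cooperate"): (5, 0),  # A races ahead and takes the market
    ("defect",    "defect"):    (1, 1),  # arms race: bad for both
}

def best_response(opponent_choice: str) -> str:
    """The choice that maximizes lab A's payoff, given what lab B does."""
    return max(("cooperate", "defect"),
               key=lambda c: PAYOFFS[(c, opponent_choice)][0])

# Defecting is a dominant strategy: it pays more whether or not the
# other lab keeps the agreement, so the collusion unravels.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```

With these payoffs, defection dominates regardless of the other lab's choice, which is exactly why an antitrust exemption alone does not make the agreement stable.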
Counterpoint: as a PhD who wrote a textbook, you are likely aware of at least the basics of Cold War history. Nuclear weapons have several significant negative externalities:
a. Radioactive waste and contamination of the earth
b. Risk of unauthorized or accidental use
c. Risk of escalation destroying most of the cities in the developed world
And in theory, governments should have been able to coordinate or collude to build no nuclear weapons; they are clearly a hazard simply to have. "EAs" worry about existential risks, and nuclear arsenals have until recently been the largest credible x-risk: a nonzero per-year risk of launch or escalation means that over a long enough timeline, a major nuclear war becomes inevitable. Currently the 3 largest arsenals are in the hands of effective dictators, including the US president, who requires only the consent of one other official, whom the president appointed, in order to launch. In addition, in the coming US election cycle, voters will choose which elderly dictator to give the nuclear launch codes to, at least one of whom appears particularly unstable.
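The "inevitable over a long enough timeline" point is just compounding of a small annual probability. A quick sketch, using an assumed, purely illustrative per-year risk figure:

```python
# If p is an assumed (illustrative) probability of a major nuclear exchange
# in any given year, treated as independent across years, then the chance of
# at least one such event over n years is 1 - (1 - p)**n, which tends to 1.

def cumulative_risk(p_per_year: float, years: int) -> float:
    """Probability of at least one event across `years` independent years."""
    return 1 - (1 - p_per_year) ** years

# Even a modest 0.5%/year risk compounds to substantial century-scale odds.
for years in (10, 50, 100, 500):
    print(f"{years:>4} years: {cumulative_risk(0.005, years):.1%}")
```

The specific 0.5% figure is a placeholder for illustration; the qualitative point holds for any fixed nonzero annual risk.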
Conclusion: if governments can't coordinate to reduce nuclear arsenals below 'assured destruction' levels, it is difficult to see how meaningful coordination to remove AI risks could happen. This puts governments in the same situation as the early 1950s, when, despite the immense costs, there was no choice but to proceed with building nuclear arsenals and exorbitantly expensive defense systems, including the largest computers ever built (by physical size): https://en.wikipedia.org/wiki/AN/FSQ-7_Combat_Direction_Central
However, just as in the 1950s, plutonium doesn't need to be sold on the civilian market without restrictions. Very high-end AI hardware, especially specialized hardware able to train or run inference on the largest neural networks, probably has to be controlled similarly to plutonium, with only small samples available to private businesses.
This wouldn't mean any slowdowns; probably acceleration. AI development would simply happen at government labs with unlimited resources and substantial security instead of at private ones.
To use my analogy: if the government didn't restrict plutonium, private companies would still have taken longer to develop fusion-boosted nukes and test them at private ranges in hopes of winning a government contract, because purchasing a private nuclear test range would require a great deal of private funding.
Less innovation, but with recursive self-improvement (RSI) you probably don't need new innovation beyond a variant of current models (because you train AIs to learn from data what makes the most powerful AIs).
This will only work if we move past GPUs to ASICs or some other specialized hardware built for training specific AI. GPUs are too useful and widespread in everything else to be controlled that tightly; even the China ban is being circumvented by Chinese companies buying through shell companies in other countries (obvious if you look at the sales numbers).
I agree with the analogy in your last paragraph, and this gives hope for governments slowing down AI development, if they have the will.