My preferred mechanism, and I think MIRI’s, would be an international treaty in which every country implements AI restrictions within its own borders. That means a head of state can’t build dangerous AI without risking war. It’s analogous to nuclear non-proliferation treaties.
The control required within each country to enforce such a ban breaks the analogy to nuclear non-proliferation.
Uranium is an input to a general-purpose technology (electricity), but it is not a general-purpose technology itself, so it is possible to control its enrichment without imposing authoritarian controls on how every person and industry uses electricity. By contrast, AI chips are themselves a general-purpose technology, so exerting the proposed degree of control would entail draconian limits on every person and industry in society.
The relevant way in which it’s analogous is that a head of state can’t build [dangerous AI / nuclear weapons] without risking war (or sanctions, etc.).
Fair enough, but China and the US are not going to risk war over that unless they believe doom is anywhere near as certain as Eliezer believes it to be. And they are not going to believe that, in part because that level of certainty is not justified by any argument anyone, including Eliezer, has provided. Even if I am wrong about that on the inside view / object level, there is enough disagreement about the claim among AI existential-risk researchers that a national government taking the outside view is unlikely to adopt Eliezer's outlier position as its own.
But in return, we would now have the tools of authoritarian control implemented within each participating country. And that is even if governments don't use their control over the computing supply to build powerful AI solely for themselves: the regime required to enforce such control would, by itself, entail draconian intrusions into the lives of every person and industry.