Slowing down / pausing AI development gives us more time to work on all of those problems. Racing to build ASI means not only are we risking extinction from misalignment, but we’re also facing a high risk of other bad outcomes: for example, ASI being developed so quickly that governments don’t have time to get a handle on what’s happening and we end up with Sam Altman as permanent world dictator.
This depends on what mechanism is used to pause. MIRI is proposing, among other things, draconian control over the worldwide compute supply. Whoever has such control has a huge amount of power to leverage over a transformative technology, which could at least possibly (and to my mind very likely would) increase the risk of ending up with a permanent world dictator, although in that scenario the dictator is perhaps more likely to be a head of state than the head of an AI lab.
Unfortunately, this means there is no low-risk path into the future, so I don’t think the tradeoff is as straightforward as you describe:
The tradeoff isn’t between solving scarcity at a high risk of extinction vs. never getting either of those things. It’s between solving scarcity now at a high risk of extinction, vs. solving scarcity later at a much lower risk.
My preferred mechanism, and I think MIRI’s, would be an international treaty in which every country implements AI restrictions within its own borders. That means a head of state can’t build dangerous AI without risking war. It’s analogous to nuclear non-proliferation treaties.
I don’t think I would call it low risk, but my guess is it’s less risky than the default path of “let anyone build ASI with no regulations”.
My preferred mechanism, and I think MIRI’s, would be an international treaty in which every country implements AI restrictions within its own borders. That means a head of state can’t build dangerous AI without risking war. It’s analogous to nuclear non-proliferation treaties.
The control required within each country to enforce such a ban breaks the analogy to nuclear non-proliferation.
Uranium is an input to a general-purpose technology (electricity), but it is not a general-purpose technology itself, so it is possible to control its enrichment without imposing authoritarian controls on how every person and industry uses electricity. By contrast, AI chips are themselves a general-purpose technology, and exerting the proposed degree of control over them would entail draconian limits on every person and industry in society.
The relevant way in which it’s analogous is that a head of state can’t build [dangerous AI / nuclear weapons] without risking war (or sanctions, etc.).
The relevant way in which it’s analogous is that a head of state can’t build [dangerous AI / nuclear weapons] without risking war (or sanctions, etc.).
Fair enough, but China and the US are not going to risk war over that unless they believe doom is anywhere close to as certain as Eliezer believes it to be. And they are not going to believe that, in part because that level of certainty is not justified by any argument anyone, including Eliezer, has provided. And even if I am wrong on the inside view / object level to say that, there is enough disagreement about the claim among AI existential risk researchers that the outside view of a national government is unlikely to fully adopt Eliezer’s outlier viewpoint as its own.
But in return, we now have the tools of authoritarian control implemented within each participating country. And that is true even if governments don’t use their control over the compute supply to build powerful AI solely for themselves. Just the regime required to enforce such control would entail draconian intrusions into the lives of every person and industry.