Intent-aligned multipolar ASI has slightly different logic, and I think it’s part of the vague hopes accelerationists hold for muddling through a multipolar ASI scenario.
I don’t want to sound like I’m defending the worldviews you’re challenging, because I think they’re most often based on inadequate consideration of the relevant factors. The challenge is to get proponents to actually come to grips with the principled reasons you describe that lead to bad outcomes.
One variant of the “we invent ASI and muddle through” hope is the expectation that ASI will remain under human control. This is disturbingly muddled together with the hopes you debunk, but it deserves to be treated separately.
If we get alignment sort-of right by creating ASI that primarily follows instructions, we have some of the same problems (humans competing with each other via superhuman ASI servants). This competition has a disturbing tendency to favor the most vicious humans. That’s analogous to the problem you describe, in which caring about humans even a little gets lost as competition favors other goals.
Most of the same problems exist; to survive, we’d need an enforceable social contract preventing anyone from ordering their ASI to create hidden facilities where it could self-improve, build weapons, and take over. I don’t know if that’s possible.
If it’s not, or we don’t bother to try, I think we get predictably horrible outcomes in which the most vicious human who gets control of an ASI (by fair means or foul) attacks first and becomes god-emperor of the lightcone, implementing their personal utopia. We can hope their sadism-to-empathy balance isn’t too bad.
If we do set up an enforceable, rule-based system of managed competition, we’d be in a scenario somewhat like the past, but with both positive and negative differences:
Downside: powerful humans have no need to preserve humans without power.
Upside: should they want to, they’ll have so much power that preserving powerless humans is trivially easy.
Hopefully, the social contract that keeps them all alive includes a proviso: “and we agree to contribute to preserving the plebeians.”
This isn’t the glorious anarchic utopia that accelerationists hope for, but neither is the current day or any point in history. Power structures bound by an organized power-sharing agreement can still allow substantial individual freedom and competition.