I suggest an additional axis: "how hard is world takeover?" Do we live in a vulnerable world? That's an additional implicit crux (i.e. people who disagree here think we need nanotech/biotech/whatever for AI takeover). This ties in heavily with the "AGI/ASI can just do something else" point, and not in the direction of more magic.
As much fun as it is to debate the feasibility of nanotech/biotech/whatever, digital dictatorships require no new technology. A significant portion of the world is already under the control of human-level intelligences (dictatorships). Depending on how stable the competitive equilibrium between agents ends up, the intelligence level required before an agent can rapidly grow, not in intelligence but in resources and parallelism, is likely quite low.
That does seem like a good axis for identifying cruxes of takeover risk. Though I think “how hard is world takeover” is mostly a function of the first two axes? If you think there are lots of tasks (e.g. creating a digital dictatorship, or any subtasks thereof) which are both possible and tractable, then you’ll probably end up pretty far along the “vulnerable” axis.
I also think the two axes alone are useful for identifying differences in world models, which can help to identify cruxes and interesting research or discussion topics, apart from any implications those different world models have for AI takeover risk or anything else to do with AI specifically.
If you think, for example, that nanotech is relatively tractable, that might imply that you think there are promising avenues for anti-aging or other medical research that involve nanotech, AI-assisted or not.
I claim the new axis is almost entirely orthogonal. Examples of concrete disagreements here are easy to find once you go looking:
"If AGI tries to take over the world, everyone will coordinate to resist."
"Existing computer security works."
"Existing physical security works."
I claim these don't reduce cleanly to the form "it is possible to do [x]", because at a high level they mostly reduce to competing explanations for "the world is not on fire":
existing security measures prevent attacks effectively (not a vulnerable world)
vs.
existing law enforcement discourages attacks effectively (vulnerable world)
existing people are mostly not evil (vulnerable world)
There is some projection onto the "how feasible are things" axis, for questions where we don't have very good existence proofs:
can an AI convince humans to perform illegal actions?
can an AI write secure software to prevent a counter-coup?
etc.
These are all much, much weaker capabilities than anything involving nanotechnology or other "indistinguishable from magic" scenarios.
And of course Meta makes everything worse. There was a presentation at Black Hat or DEF CON by one of their security people about how it's easier to go after attackers than to close security holes. In this way they contribute to making the world more vulnerable. I'm having trouble finding it, though.
This would clearly put my position in a different place from the doomers'.