I simply disagree that one must avoid relying on facts like: infectious disease can kill millions; devices can be hacked; people can be manipulated by an email or by a talking head on a screen. Hopefully most people wouldn’t actually dispute any of that, and would instead be objecting to other aspects of a proposed scenario, like the escalation of those real phenomena to the point of extinction, or the possibility of an AI smart enough and determinedly malevolent enough to carry out such an escalation.
I think most people would agree infectious disease could kill millions. But killing billions, or everyone, feels very different from that. Covid didn’t kill billions, so why would AI?
I think most people accept that hackers can break into someone’s Facebook account or scam someone, but they don’t think a hacker could plausibly gain access to nuclear weapons or shut down infrastructure at scale. North Korea can’t destroy us with hacking, so why can AI?
I think most people could believe someone could be manipulated, but not easily manipulated like a puppet.
If you end up talking about infectious disease, hacking, or manipulation, the nerfed version people are willing to believe will make it sound like a fair fight when it isn’t. You could probably talk someone into believing the harder version of these things, but I wouldn’t put them in the one-minute elevator pitch.
The easier-to-swallow version of the “manipulated like a puppet” argument is: “the ASI just needs to find someone it can talk into doing what it wants, even if you personally wouldn’t.”