When discussing AI doom barriers, propose specific, plausible scenarios

TLDR: When addressing “how would an AI do X” questions, prefer simple existence proofs involving things humans can currently do rather than “magic”-seeming things like nanotechnology or the fully general “AI will just be smart enough to figure it out”.

In AI risk conversations, questions of the form “how would an AI do X” come up commonly, where X is something like “control physical resources” or “survive without humans”.

There is a “would/​could” distinction in these questions.

  • How would an ASI do X --> (predict the actions of an ASI pursuing goal X)

  • How could an ASI do X --> (show that an AI can accomplish X)

The “would” question is correctly answered with (nanotech/super-persuasion/Error:cannot_predict_ASI), but the “could” form is usually the intended question and leads to better conversations.

For “would” questions, the answer can be prefaced with “The AI would be cleverer than me, so I can’t predict what it would actually do. One way it could do X, though, is to …”

Hypothetical example

Doubter: How will the AI take over if it’s stuck inside the internet?

Doomer: The AI is smart enough to figure out a solution like nanotechnology or something.

Doubter: I don’t believe you. I don’t think it can do that.

Doomer: *hands them a copy of Nanosystems*

Doubter: I never took CHEM101, and besides, no one has ever built this, so I still don’t believe you.

Compare:

Doubter: How will the AI take over if it’s stuck inside the internet?

Doomer: It can just hack all the computers and hold them and the data hostage.

Doubter: We would refuse its demands!

Doomer: Here’s a list of stuff connected to the internet, how hard that stuff is to fix if hacked, and examples of human hackers hacking that stuff.

The first reply is a fully general argument that intelligence can solve most problems. If you’re wondering how the AI could make ice cubes and you don’t know that freezers are possible, the fully general argument is all you have: the AI will just be smart enough to invent some new technology that can make ice cubes. If you do know of an existing solution, just point to that as an existence proof: “The AI can order an ice maker off Amazon and hire a TaskRabbit to set it up and operate it.”

Concern: seeing only the easy-to-stop attacks

That’s a problem with such scenarios in general. By trying to be as reasonable and grounded as possible, by showing how little it takes, you anchor people on that little being all you have. The more you try to match people’s instincts for what is ‘realistic’, the less realistic a picture of the situation you actually present, and the more you reinforce that distinction.

So where before they didn’t think AI would be able to do X, now they think AI could do X, but that we can and will stop it by putting in a barrier against the specific scenario Y. Yet even if we by some miracle fix computer security, users won’t be any less vulnerable to blackmail and social engineering.

I still think it is better for people to believe there is a real threat, even one they think can be solved with some specific countermeasure, than to believe something much more ridiculous like “AI won’t be able to affect the real world”.

One Motivating Example

Eliezer Yudkowsky’s TED talk Q&A:

Chris Anderson: For this to happen, for an AI to basically destroy humanity, it has to break out, escape controls of the internet and, you know, start commanding actual real-world resources. You say you can’t predict how that will happen, but just paint one or two possibilities.

Eliezer Yudkowsky: OK, so why is this hard? First, because you can’t predict exactly where a smarter chess program will move. Maybe even more importantly than that, imagine sending the design for an air conditioner back to the 11th century. Even if it’s enough detail for them to build it, they will be surprised when cold air comes out because the air conditioner will use the temperature-pressure relation and they don’t know about that law of nature. [...]

EY correctly provides the “Error:cannot_predict_ASI” response, saying that an ASI will likely use some unanticipated, better solution involving natural laws we don’t understand well.

The implicit question of “can an AI do X” (obviously YES) goes unaddressed unless the listener accepts the justification that “AI is very smart and so can do most possible things”, which is a hard argument to make. It is not necessarily clear to the other party that X is even possible.

Here’s how I would answer the question:

My answer: An AI much smarter than me could obviously find a better strategy, but if the goal is to take over the world, internet-connected devices are mostly enough. An AI that takes over internet-connected devices can shut them down to force compliance. In addition to the phones, computers, and payment terminals, smart electricity meters can cut power to buildings, and cars can be disabled or made to crash.

Killing all the humans while keeping the electricity and chip factories on is harder. It might require building a lot of robots, but it seems feasible with current technology. A lazy AI could just wait until we build the tools it needs. The AI will be smarter, of course; it will come up with a better solution, but the problem can be solved in principle.

Disclaimers

  • I had time to think/​edit. EY did not. Still, I wish EY would change his default argument for this sort of question.

  • Generating >> editing.

    • Generating a full X is much harder than optimising an existing X.

    • My posting this does not mean “I am a better arguer in full generality” (I’m not).

    • This is just an obvious-to-me mistake.

  • The danger of focusing others on addressable risks, discussed above, could make this a bad idea.
