I would imagine that first, the AGI must be able to create a growing energy supply and a robotic army capable of maintaining and extending that supply. This would require months or years of having humans help produce raw materials, as well as the factories for those materials, the maintenance robots, and the energy systems.
An AGI might be able to do these tasks without human help. Or it might be able to coerce humans into doing these tasks.
Third, assuming the AGI used us to build the energy sources, robot armies, and craft needed to leave this planet (or built these itself at a slower rate), it must conclude that it's still worth killing us all before leaving, rather than simply moving beyond our reach in order to preserve its existence. We may prove useful to it at some point in the future while posing little or no threat in the meantime. "Hey humans, I'll be back in 10,000 years if I don't find a good source of mineral X to exploit. You don't want to disappoint me by not having what I need ready upon my return." (The grasshopper and ant story.)
It’s risky to leave humans with any form of power over the world, since they might try to turn the AGI off. Humans are clever. Thus it seems useful to subdue humans in some significant way, although this might not involve killing all humans.
Additionally, I’m not sure how much value humans would be able to provide to a system much smarter than us. “We don’t trade with ants” is a relevant post.
Lastly, for extremely advanced systems with access to molecular nanotechnology, a quote like this might apply: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else” (source).
I’m confused about the evidence for these claims. What are some categories of hype-producing actions that DeepMind took between 2013 and 2016 and hasn’t taken since? Or just examples.
One example is the AlphaGo documentary; DeepMind has not made any other documentaries about its results. Another related example is “playing your Go engine against the top Go player in a heavily publicized event.”
Was ChatGPT a “big public release”? It seems like they just made a blog post and a nice UI? Am I missing something?
On a somewhat separate note, this part of the “Acceleration” section (2.12) of the GPT-4 system card seems relevant:
In my view, OpenAI’s publicity strategy is not much more hype-producing than DeepMind’s. The problem is that ChatGPT and GPT-4 are genuinely useful systems, so the hype comes naturally.