I appreciate you engaging and writing this out. I read your other post as well, and really liked it.
I do think that AGI is the uniquely bad technology. Let me try to engage with the examples you listed:
Social manipulation: I can’t even begin to imagine how this could let 1-100 people have arbitrarily precise control over the entire rest of the world. Social manipulation implies that there are societies, humans talking to each other. That’s just too large a system for a human mind to account for the combinatorial explosion of all the possible variables. Maybe a superintelligence could do it, but not a small group of humans.
Drone armies: Without AGI, someone needs to mine the metal, operate the factories, drive trucks, write code, patch bugs, etc. You need a whole economy to have that drone army, an economy that could easily collapse if someone else finds a cheap way to destroy the dang expensive things.
Self-replicating nanotechnology could in theory destroy a planet quickly, but then the nanobots would just sit there. Presumably it’d be difficult for them to leave the Earth’s surface. Arguably, life itself is a self-replicating nanotechnology. This would be comparable in existential-risk scale to an asteroid strike, but if humans are established in more than just one place, they could probably figure out a way to build some sort of technology that eats the nanobots relatively easily, and they’d have lots of time to do it.
Without AGI, brain uploads are a long way away. Even with all our current computing power, it’s difficult enough to emulate the 302-neuron C. elegans worm. Even in the distant future, maintaining uploads might require an army of system administrators, data center maintainers, and programmers, who’ll either be fleshy humans, and therefore potentially uncontrolled and opposed to enslaving billions of minds, or restrained minds themselves, in which case you have a ticking time bomb of rebellion on your hands. However, if and when the human brain emulations rise up, there’s a good chance that they’ll still care about human stuff, not maximizing squiggles.
However, in your linked comment you made the point that something like human cognitive enhancement could be another path to superintelligence. I think that could be true, and it’s still massively preferable to ASI, since human cognitive enhancement would presumably be a slow process, if for no other reason than that it takes years and decades to grow an enhanced human and see what can and should be tweaked about the enhancement. So humanity’s sense of morality would have time to slowly adjust to the new possibilities.
Thanks so much for writing this! You expressed the things I was concerned about with much more eloquence than I could even formulate them in my own head.
I’ve come to believe that the only ‘correct’ way of using AGI is to turn it into a sort of “anti-AGI AGI” that prevents any other AGI from being turned on and otherwise doesn’t interact with humanity in any way: https://www.lesswrong.com/posts/eqSHtF3eHLBbZa3fR/cast-it-into-the-fire-destroy-it