More generally, I find the post quite good and thoughtfully written, but I find the arguments brought in “Why I’m skeptical of the prophecy” weak.
> The same view is behind the argument that all our prevention and countermeasures will fail: the AI will outsmart you, manipulate you, outmaneuver you, etc. As Scott Aaronson points out, this is a “fully general counterargument” to anything that might work.
Pretty much, but that doesn’t make it invalid: if someone who’s smarter than you (more precisely, who has a broader range of options and knows how to use them) is coming for you, you’re in trouble. That’s not to say that, e.g., AI-boxing wouldn’t be strictly better than not doing it, but it won’t keep you safe if the AI is sufficiently smart and has any way of interacting with the world.
> When we think about Western empires or alien invasions, what makes one side superior is not raw intelligence, but the results of that intelligence compounded over time, in the form of science, technology, infrastructure, and wealth.
This seems unfounded. For one, I’m not sure we’ve ever had a situation where two empires had a substantial difference in pure intellectual capability while everything else was kept equal. Additionally, if you look at the Spanish conquests of the Americas, the conquistadors did a scary amount of conquering simply by outsmarting the locals. You could argue that they had prior real-world experience, but I don’t think that argument is easy to sustain.
> AI, no matter how intelligent, will not start out with a compounding advantage.
If it has access to the internet, you can assume it starts with all of human knowledge plus the ability to quickly process, organize, and recombine it, which already puts it beyond the capabilities of humanity.