Humanity might have a chance against a non-generally-intelligent paperclip maximizer, but probably less of a chance against a horde of different maximizers.
That is very unclear, and people’s politics seem to be a good predictor of their opinions on “competing intelligences” scenarios, which suggests that nobody really has a clue.
My intuition is that a single narrowly focused specialized intelligence might have enough flaws to be tricked or outmaneuvered by humanity. For example, an agent that wanted to maximize paperclip production but was average or poor at optimizing mining, exploration, and research could be cornered and destroyed before it discovered nanotechnology or space travel and spread out of control to asteroids and other planets. Multiple competing intelligences, by contrast, would explore more avenues of optimization, making coordination against them much harder and likely interfering with many separate parts of any coordinated human plan.