If a creature has no drives, it does not feel the need to do anything—this is why gengineering your creatures to remove their need for food or making them super-resistant to disease can result in a very boring game! An Artificial Intelligence with no drives is a contradiction in terms. It wouldn't be intelligent. It would be a brick.
-- Creatures Wiki
I am not saying that the most likely result of AGI research is AIs with no drives. I am saying that taking over the world is a very complex business that doesn't just happen as a side effect or unforeseen implication of some information-theoretically simple AGI design. Pretty much nothing that actually works efficiently happens as an unforeseen side effect.
Taking over the world is instrumentally useful, regardless of your goals. If an AI has drives (and sufficient intelligence), it will take over the world. If it has no drives, it's not an AI.
This isn’t something that just happens while it tries to accomplish its goal in another manner. It’s a subgoal. It will accomplish its subgoals efficiently, and it’s entirely possible for someone to miss one.
Only if there isn’t some mutually exclusive subgoal that more efficiently achieves its goals. It may turn out that taking over the Earth isn’t the most efficient way to tile interstellar space with paperclips, for example, in which case a good enough optimizer will forego taking over the Earth in favor of the more efficient way to achieve its goals.
Of course, that’s not something to count on.
You normally have to at least arrest any nearby agents that might thwart your own efforts.
It's possible that it won't take over Earth first. It would likely start with asteroids to save on shipping, and if FTL is cheap enough, it might use other stars' asteroids before it resorts to the planets. But it will resort to the planets eventually.
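To make the "mutually exclusive subgoal" point above concrete, here is a minimal, purely illustrative sketch: the Plan class, the best_plan helper, and every number in it are invented for this example, not anyone's actual model. The idea is only that a good enough optimizer takes whichever plan scores best on its own metric; taking over Earth gets chosen only if it happens to score highest.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    expected_paperclips: float  # made-up payoff estimate
    resource_cost: float        # made-up cost estimate

def best_plan(plans):
    """Pick the plan with the highest expected paperclips per unit of resource."""
    return max(plans, key=lambda p: p.expected_paperclips / p.resource_cost)

candidates = [
    Plan("take over Earth", expected_paperclips=1e12, resource_cost=50.0),
    Plan("mine nearby asteroids", expected_paperclips=8e11, resource_cost=20.0),
    Plan("do nothing", expected_paperclips=0.0, resource_cost=1.0),
]

# With these invented numbers the asteroid plan wins (4e10 vs 2e10 clips per
# unit of resource), so "take over Earth" is skipped -- but only because the
# numbers happen to favour something else, not because the optimizer is nice.
print(best_plan(candidates).name)
```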