> Sure, it is theoretically possible to establish a competition between millions of superintelligences with conflicting goals that doesn’t end in disaster
What do you mean by “competition”? The millions are each trying to maximize their own goals, but usually don’t care to suppress others’ goals. Cooperating under limited resources, rather than expending resources on fighting, is, I think, universal. In general, game theory applies to smarter and stronger beings just as it applies to us; the differences are of the type “AIs can merge as a way of cooperating, though humans can’t,” not of the type “with beings of silicon substrate, cooperation is always inferior to conflict.”
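As a toy illustration of that game-theoretic point (my sketch, with made-up numbers, not anything specified in the exchange): when open conflict burns part of the contested resource and costs something to wage, both sides expect more from splitting the intact resource than from fighting over the remainder, at any strength ratio. The resource value, destruction fraction, and fight cost below are illustrative assumptions.

```python
# Toy model of "trade beats fight": two agents contest a resource.
# Fighting burns a fraction of the resource and costs both sides;
# trading splits the intact resource by relative strength.
# All numbers are illustrative assumptions.

RESOURCE = 10.0     # value of the contested resource
DESTRUCTION = 0.5   # fraction of the resource burned by open conflict
FIGHT_COST = 1.0    # resources each side spends waging the fight

def fight(strength_a: float, strength_b: float) -> tuple[float, float]:
    """Expected payoffs if both fight: winner takes what remains."""
    remaining = RESOURCE * (1 - DESTRUCTION)
    p_a = strength_a / (strength_a + strength_b)  # chance that A wins
    return p_a * remaining - FIGHT_COST, (1 - p_a) * remaining - FIGHT_COST

def trade(strength_a: float, strength_b: float) -> tuple[float, float]:
    """Payoffs if both split the intact resource by relative strength."""
    share_a = strength_a / (strength_a + strength_b)
    return share_a * RESOURCE, (1 - share_a) * RESOURCE

for a, b in [(1, 1), (3, 1), (10, 1)]:
    print(f"strengths {a}:{b}  fight={fight(a, b)}  trade={trade(a, b)}")
# With these numbers, each side's trade payoff exceeds its expected fight
# payoff at every strength ratio: conflict both destroys part of the prize
# and costs resources to wage.
```

Making the agents smarter changes the strengths, not the fact that destruction is a deadweight loss; that is the sense in which game theory carries over.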
> OsamaBot concluded that the best possible negotiated agreement was going to be worse than just blowing up the planet
I don’t think his extrapolated volition would endorse that. I don’t think theism could survive extrapolated cognition.
> spore cloud
There is an illusion of transparency here because I do not know what that means. Is that a purely destructive thing, is it supposed to combine destruction with “planting” baby AIs like the one that produced it, or what?
> How does having millions of approximate equals impact the recursive self-improvement cycle?
I think it would motivate merging. That’s what happened with biological cells and tribes of humans.
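A hedged sketch of why approximate equality can motivate merging, framed as Nash bargaining; the framing and all payoff numbers here are my own assumptions, not anything specified in the thread. Two agents choose a weight w for a merged utility w·U1 + (1 − w)·U2 such that each expects more from the merged successor than from conflict:

```python
# Toy model of merging as a way of cooperating (illustrative numbers only):
# two agents pick a weight w for a merged utility w*U1 + (1 - w)*U2 by
# maximizing the Nash bargaining product of their gains over the payoffs
# each would expect from open conflict.

d1, d2 = 3.0, 2.0   # assumed expected payoffs to each agent under conflict

def merged_payoffs(w: float) -> tuple[float, float]:
    # Assumed payoffs if the merged successor runs with weight w. Merging
    # avoids conflict losses, so the total pie (10) exceeds d1 + d2 (5).
    return 10.0 * w, 10.0 * (1.0 - w)

candidates = []
for i in range(1, 1000):
    w = i / 1000.0
    p1, p2 = merged_payoffs(w)
    candidates.append((max(p1 - d1, 0.0) * max(p2 - d2, 0.0), w))

_, best_w = max(candidates)
p1, p2 = merged_payoffs(best_w)
print(f"bargained weight w = {best_w:.2f}: merged payoffs ({p1:.1f}, {p2:.1f}) "
      f"vs conflict payoffs ({d1}, {d2})")
# Output: w = 0.55 gives (5.5, 4.5) against (3.0, 2.0): both agents do
# strictly better by merging, which is what makes the merger agreeable.
```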
> …with which they can trade between each other.
I don’t see why they only trade in your scenario (or would only fight in mine). I don’t see how you would program the individual AI to divide the universe into slices and enforce some rules among individuals. This seems like the standard case of giving a singleton a totally alien value set after which it tiles the universe with smiley faces or equivalent.
I don’t see how it’s directly comparable to creating millions of AIs.
> I don’t think his extrapolated volition would endorse that. I don’t think theism could survive extrapolated cognition.
You cannot assume that the volitions of millions of agents will not include something catastrophically bad for you. “Extrapolated Volition” doesn’t make people nice.
> I don’t see how it’s directly comparable to creating millions of AIs.
We were talking, among other things, about burning the cosmic commons. It’s an allusion to Robin Hanson’s “Burning the Cosmic Commons.”
> You cannot assume that the volitions of millions of agents will not include something catastrophically bad for you. “Extrapolated Volition” doesn’t make people nice.
It only takes one.
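The arithmetic behind “it only takes one” (my illustration; the per-agent probability p and population N below are assumptions, not estimates): if each of N independently extrapolated volitions has even a tiny chance p of being catastrophically bad, the chance that at least one is bad is 1 − (1 − p)^N, which is near-certain for millions of agents.

```python
# "It only takes one": chance that at least one of N independent agents
# has catastrophically bad values, given a tiny per-agent probability p.
# The values of p and N are illustrative assumptions, not estimates.

def p_at_least_one_bad(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for p in (1e-3, 1e-5, 1e-7):
    for n in (10**3, 10**6):
        print(f"p={p:.0e}, N={n:>10,}: P(at least one bad) = "
              f"{p_at_least_one_bad(p, n):.6f}")
# At p = 1e-5, a million agents make at least one bad one near-certain
# (P ≈ 0.99995); at p = 1e-3 the probability is indistinguishable from 1.
```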