The basic space of possible long-term relationships between humans and advanced AI is simple: (1) humans retain full control, (2) AIs assert full control, or (3) humans and AIs share control.
I mean, also (4) endless shifting of control between various factions, because neither “humans” nor “AI” is a long-term stable unitary category, and (5) aliens, and (6) neither: intelligence is just sparse across space-time, and (7) …
I think this exploration hinges a lot on a few under-defined terms. What is “control”, how is it measured, and what is “identity” for the controlling or controlled things? As an example of my confusion: humans are a very recent species on earth, and techno-humans just a blip. Are ants disempowered in any sense meaningful to them?
Options (1)-(3) are a characterization of the possible relationships between humanity and AI, conditional on both existing and interacting, rather than an attempted taxonomy of all possible futures. I think (4) is a special case of (3), while (5) and (6) are things that could happen to both parties rather than relationships between them, so they’re outside the scope of what I’m trying to characterize.
The question I’m asking is: given that we are building these systems and we will coexist with them if we have any future at all, what does the space of stable arrangements look like? I claim (a) that space is pretty tightly bounded, and (b) the part almost nobody is working on is understanding the other party well enough to make anything (a)-shaped work.