Points (1)-(3) are a characterization of the possible relationships between humanity and AI, conditional on both existing and interacting, rather than an attempted taxonomy of all possible futures. I think (4) is a special case of (3), and (5) and (6) are things that could happen to both parties rather than relationships between them, so they're not within the scope of what I'm trying to characterize.
The question I'm asking is: given that we are building these systems, and that we will coexist with them if we have any future at all, what does the space of stable arrangements look like? I claim (a) that the space is pretty tightly bounded, and (b) that the part almost nobody is working on is understanding the other party well enough to make anything (a)-shaped work.