But in the other worlds I expect governance to sit between many different AI actors and ensure that no single actor controls everything, and then to tax them to pay for this function.
I mean… this still sounds like total human disempowerment to me? Just because the world is split up between 5 different AI systems doesn’t mean anything good is happening? What does “a single actor controls everything” have to do with AI existential risk? You can just have 4 or 40 or 40 billion AI systems control everything and this is just the same.