For what an okayish possible future could look like, I have two stories in mind:
Humans end up as housecats: living among much more powerful creatures doing incomprehensible things, but still mostly cared for.
Some humans get uplifted to various levels, others stay baseline. The higher you go, the more aligned you must be to those below. So still a hierarchy, with super-smart creatures at the top and housecats at the bottom, but with more levels in between.
A post-AI world where baseline humans are anything more than housecats seems hard to imagine, I’m afraid. And even getting to be housecats at all (rather than dodos) looks to be really difficult.
Make the (!aligned!) AGI solve a list of problems, then end all other AIs, convince (!harmlessly!) all humans never to make another AI, in a way that they will pass down to future generations, and then end itself.
I also agree with all of this.