If we succeed at AI safety, humans will probably decide the future of the universe
There are two levels of success: humanity gets to live, and humanity keeps control of the future. An AGI as aligned as a human has a decent chance of granting the currently alive humans either boon. A pseudokind AGI that's not otherwise very aligned only lets humanity live; it keeps the future for itself.
(I have Yudkowskian levels of doom for losing control of the future, assuming there is no decades-long pause to figure things out, but significantly lower ones for everyone ending up dead.)