[Question] What long-term good futures are possible (other than FAI)?

Does anyone know of any potential long-term futures that are good and do not involve the creation of a friendly superintelligence?

To be clear, "long term" here means a billion years or more. In most of my world models, we settle within the next few hundred years into a state from which it is much easier to predict the future (i.e. a state where it seems unlikely that anything much will change).

By "good", I mean any future that you would prefer to be in if you cared only about yourself, or in which you would accept being replaced by a robot that would do just as much good. A weaker condition would be any future from which you would not want to be erased from existence.

A superintelligent agent running around doing whatever is friendly or moral would meet these criteria, but I am excluding it because I already know about that possibility. Your futures may contain superintelligences that aren't fully friendly; a superintelligence that acts as a ZFC oracle is fine.

Your potential future doesn't have to be particularly likely, just remotely plausible. You may assume that a random 1% of humanity reads your reply and goes out of their way to make that future happen. That is, people optimizing for this goal can use strategies of the form "someone does X" but not "everyone does X". You can get "a majority of humans does X" if X is easy to do and explain, and most people have no strong reason not to do X.

You should make clear what stops somebody from making a UFAI that goes on to destroy the world (e.g. a paperclip maximizer).

What stops Moloch? What stops us from trashing everything of value in order to win competitions? (Hanson's hardscrabble frontier replicators.)