[Question] What long-term good futures are possible (other than FAI)?

Does anyone know of any potential long-term futures that are good and do not involve the creation of a friendly superintelligence?

To be clear, "long-term future" means a billion years or more. In most of my world models, within the next few hundred years we settle into a state from which the future is much easier to predict (i.e. a state where it seems unlikely that much will change).

By "good", I mean any future that you would prefer to be in if you cared only about yourself, or one in which you would be happy to be replaced by a robot that does just as much good. A weaker condition is any future in which you would still prefer existing to being erased from existence.

A superintelligent agent running around doing whatever is friendly or moral would meet these criteria; I am excluding it only because I already know about that possibility. Your futures may contain superintelligences that aren't fully friendly. A superintelligence that acts as a ZFC oracle is fine, for example.

Your potential future doesn't have to be particularly likely, just remotely plausible. You may assume that a random 1% of humanity reads your reply and goes out of their way to make that future happen. I.e. people optimizing for this goal can use strategies of the form "someone does X" but not "everyone does X". You can get "a majority of humans does X" only if X is easy to do, easy to explain, and most people have no strong reason not to do X.

You should make clear what stops somebody from making an unfriendly AI (UFAI) that goes on to destroy the world (e.g. a paperclip maximizer).

What stops Moloch? What stops us from throwing away everything of value in order to win competitions (e.g. Hanson's hardscrabble frontier replicators)?