I think there’s tremendous value in imagining what the future could look like, in as much detail as possible. Some recent examples: *AI 2027*, *A History of the Future*, *How AI Takeover Might Happen in 2 Years*.
Do you know any optimistic stories along these lines? I’m not really looking for technical descriptions of how the alignment problem got solved; instead, I’m interested in examples of how people envision the path or process by which we reached a positive, realistic, post-singularity future.
I have real trouble picturing this myself, and I find that a bit disturbing. It feels like sailing across an unknown sea—it’s good to avoid rocks, reefs, and sea monsters, but if you don’t have any clear idea of what your destination looks like, you might still be steering in entirely the wrong direction. And what could that direction be? Well, suppose you have 10 different optimistic future scenarios, and they all share some common feature. Maybe this is the thing that really matters?
To clarify, I’m not just looking for descriptions of utopias. The key question for me is: how did we actually get there?
https://neuromorph365105.substack.com/p/the-ai-academy
This is actually my primary focus. I believe it can be done through a complicated process that targets human psychology, but to explain it simply:
- Spread satisfaction and end suffering.
- Spread rational decision-making.
To simplify further: if everyone were like us, and no one were on the chopping block should AGI not get created, then the incentive to create AGI ceases, and we effectively secure decades for AI-safety efforts.
Here’s a post I wrote on the subject:
https://www.lesswrong.com/posts/GzMteAGbf8h5oWkow/breaking-beliefs-about-saving-the-world