[Author’s note: literally right before I posted this, Scott Alexander posted his April 2026 linkpost, and whaddayaknow, link number four is a similar take on Omelas. I commented on this in r/slatestarcodex, and then u/EquinoctialPie informed me of two other posts on the same topic, both of which are slightly different from mine but are also good. Idk how I managed to get my take on a 50+ year-old story scooped like this, but all three posts are interesting and I would be remiss to not acknowledge them here. To be clear, I wrote this entire article before seeing them and did not change it after!]
Why is utopia so broadly ‘difficult to picture’?
This topic comes up often, and especially recently: many people have given the same take on a short story (The Ones Who Walk Away From Omelas), such as this take with this quote:
That take is, roughly, ‘there needs to be some secret downside for people to believe in a utopian world’.
Now, obviously people wish to see the world improve, but a take as general and ‘obvious’ as that seems to have issues:
Sometimes I talk to people, even people I respect, who outright think utopia is ‘impossible’.
Stories always picture a specific utopia with specific details, which makes them spread less efficiently as memes: they feel like stories about a particular universe rather than a general pattern. No one has a “monopoly” on utopia; even ‘heaven’ does not count (for many reasons, including that “earth turning into heaven” is no cultural meme).
Discussions of utopia provoke a lot of disagreement.
Besides ‘dying to dumb causes’ of the LLM-hooked-up-to-the-military type or some single-minded “AGI” structure, I seriously wonder whether there is a risk of the type where a superintelligence successfully colonizes Earth and takes power, but, despite having read all human literature (including all of our ASI fears), drifts in values in a way that is repugnant or dystopian and clearly a bad idea.
I myself would like to make a story (well, honestly, a sequel to an interactive work I have already made, expanding its universe) with a genuine portrayal of utopia, but thousands of stories before mine have tried, so it could just end up being ineffective.
It is like people are unable to picture utopia the same way someone born in the 1900s could probably not picture pocket computers or anything on the internet in full detail.
Theories on why this is? I would guess that, for most people, life feels generally ‘good enough’, with no sense that there ought to be a significantly better society. Oddly, the nostalgic times of childhood could maybe be considered a ‘utopia’, if only for the absence of worrying about money, filing paperwork, etc.
It’s an interesting question. Some thoughts:
Generally, cognition is aimed at solving problems, so we are drawn to think of downside risks and conflicts.
As a second-order effect, conflict and downsides are more memetically fit. I can easily name media imagining bad results of reprogenetics (Gattaca, Brave New World, Sandel, Habermas, Fukuyama, etc. etc.); it is much harder to name media imagining good results.
As Kaarel notes, genuine utopias are also alien, and our real values largely route through developmental processes (learning more, understanding more, reflecting more, empathizing more, regularizing through childish understanding, etc.).
As a corollary of the previous point, contra the Anna Karenina principle, there is something much more relatable about pain, suffering, failure, problems, death, conflict, and bad outcomes generally. There are many ways to fail, but they all have a lot in common. In contrast, consider the free creation of a life in general. Or, as a metonymy, consider the free creation of a piece of music: there are so many degrees of freedom and an infinite range of structures to express in the music. Cf. Fun theory: https://www.lesswrong.com/w/fun-theory.
Hope is painful. Thus, the infinite hopes of children are worn away with age; and for an adult, regaining hope would be painful.
Cf. “Border Guards” by Greg Egan. https://www.gregegan.net/BORDER/Complete/Border.html
“It is like people are unable to picture utopia the same way someone born in the 1900s could probably not picture pocket computers or anything on the internet in full detail.”
That’s easily explainable because pocket computers are dystopian :-)
Seriously though, I think the general problem is that we’re machines for doing things. Imagine a utopia for cars: a universe filled with car washes, where cars are cleaned with soft brushes all day. A few roads too, optimized for being fun for cars. Does that sound like a good use of the universe? Hmm. But what would be a better use of the universe, from a car’s point of view? Hmmmmmmm.
It’d be easy to change the problem, assume that we’re machines for getting enjoyment. Then utopia would be a universe filled with enjoyment. But we aren’t such machines.
“It is like people are unable to picture utopia the same way someone born in the 1900s could probably not picture pocket computers or anything on the internet in full detail.”
an important reason very very very good worlds are hard to picture (especially in full detail) is that they are very far away from us in development time. like, i think there would probably be more technological/economic/social development between now and very very very good worlds than between the big bang and now. these worlds would be extremely hard for us to make sense of (though ultimately not meaningless). also, my guess is that these worlds will still be developing; this would thwart attempts to conceive of them as given/finished things
or you might be asking why people find it hard to picture any world that is even merely much better than ours, and not necessarily very very very good or near-perfect. in that case, my comment is less of a response
Plenty of people and collectives have written detailed depictions of utopia or related, like the Elysian’s dozens (hundreds?) of essays and Max Harms’ 70+ posts. Also Richard Ngo’s characterising utopia, Cleo Nardo’s stratified utopia, and plex’s utopiography although these are more conceptual.
The disagreement you mention is a separate issue. Holden looked at reactions to various efforts to describe utopia and concluded “You can emphasize the abstract idea of choice, but then your utopia will feel very non-evocative and hard to picture. Or you can try to be more specific, concrete and visualizable. But then the vision risks feeling dull, homogeneous and alien”, then followed up with a framework for visualising utopia that avoids these problems by describing a spectrum of utopias from “the status quo plus a contained, specific set of changes” to the sort of radical utopia you find in trans/posthumanist fiction.
I don’t understand why the likelihood that many or most people won’t like your story’s genuine portrayal of your version of utopia should keep you from trying. The set of concrete, visualisable descriptions of utopia that everyone today likes is empty.
I agree with Kaarel about the difficulty of clearly envisioning something very distant from the current state of things.
But I also wonder if there is not another issue here: whose utopia are we talking about? While I doubt I could write out a complete utopian world even for myself, I am pretty certain that many would take exception to calling it a utopia, because they see things differently. So unless one is talking about some private world of one’s own (or of a small number who agree with its structures), making a utopian setting is a big public-choice problem. We cannot even agree on good policies in many cases; reaching for utopian outcomes seems a stretch.
There’s a strong evolutionary bias for rejecting things that are too good to be true. Many times throughout history, technology has appeared to promise abundance, but this hasn’t come to pass, and many of the people preaching imminent utopia have had ill intent. Automation was a core facet of USSR propaganda, for instance.
On a less emotional/heuristic level, finite matter in the universe is a more imminent constraint than most people realize. I did the math a while back, and if all of humanity had the birthrate of Eritrea[1], we’d end up with more bodies than the universe has atoms in fewer generations than humanity has already experienced. Barring the ability to create more matter, we cannot promise abundance and freedom to everyone indefinitely. Depending on one’s politics and philosophy, this means that utopia is either bad because of the tacit implication of some kind of eugenics policy, or bad because it amounts to strip-mining the universe to avoid one.
Genetic and cultural factors that increase birthrate are both heritable, and even low-birthrate societies have high-birthrate subgroups. If anything, this is a conservative estimate for what will happen in the long term when resource constraints on reproduction disappear.
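The back-of-the-envelope claim above is easy to check with a few lines of Python. This is only a sketch under my own assumed figures (roughly 8 billion people today, ~10^80 atoms in the observable universe, and Eritrea-like fertility of about four children per woman, i.e. a population doubling per generation); none of these numbers are taken from the comment itself:

```python
# Rough check: at ~4 children per woman (2 surviving children per person),
# the population doubles each generation. How many doublings until there are
# more bodies than atoms in the universe? All constants are assumptions.

POP_NOW = 8e9             # approximate current world population
ATOMS_IN_UNIVERSE = 1e80  # common order-of-magnitude estimate
GROWTH_PER_GEN = 2.0      # doubling per generation at ~4 children/woman

generations = 0
pop = POP_NOW
while pop < ATOMS_IN_UNIVERSE:
    pop *= GROWTH_PER_GEN
    generations += 1

print(generations)  # 233 generations suffice

# Compare with generations humanity has already experienced:
# ~300,000 years of Homo sapiens at ~25 years per generation.
print(300_000 // 25)  # 12,000 generations
```

So a few hundred generations of unconstrained doubling outrun the atom budget, versus the roughly twelve thousand generations behind us, which is consistent with the comment's claim.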