Book review: Deep Utopia: Life and Meaning in a Solved World, by Nick
Bostrom.
Bostrom’s previous book,
Superintelligence,
triggered expressions of concern. In his latest work, he describes his
hopes for the distant future, presumably to limit the risk that fear of
AI will lead to a Butlerian Jihad-like scenario.
While Bostrom is relatively cautious about endorsing specific features
of a utopia, he clearly expresses his dissatisfaction with the current
state of the world. For instance, in a footnoted rant about preserving
nature, he writes:
Imagine that some technologically advanced civilization arrived on
Earth … Imagine they said: “The most important thing is to
preserve the ecosystem in its natural splendor. In particular, the
predator populations must be preserved: the psychopath killers, the
fascist goons, the despotic death squads … What a tragedy if this
rich natural diversity were replaced with a monoculture of healthy,
happy, well-fed people living in peace and harmony.” … this would
be appallingly callous.
The book begins as if addressing a broad audience, then drifts into
philosophy that seems obscure, leading me to wonder if it’s intended as
a parody of aimless academic philosophy.
Future Technology
Bostrom focuses on technological rather than political forces that might
enable utopia. He cites the example of people with congenital
analgesia,
who live without pain but often face health issues. This dilemma could
be mitigated by designing a safer environment.
Bostrom emphasizes more ambitious options:
But another approach would be to create a mechanism that serves the
same function as pain without being painful. Imagine an “exoskin”: a
layer of nanotech sensors so thin that we can’t feel it or see it,
but which monitors our skin surface for noxious stimuli. If we put our
hand on a hot plate, … the mechanism contracts our muscle fibers so
as to make our hand withdraw
Mass Unemployment
As technology surpasses human abilities at most tasks, we may eventually
face a post-work society. Hiring humans would become less appealing when
robots can better understand tasks, work faster, and cost less than
feeding a human worker.
That conclusion shouldn’t be controversial given Bostrom’s assumptions
about technology. Unfortunately, those assumptions are highly
controversial, at least among people who haven’t paid close attention
to trends in AI capabilities.
The stereotype of unemployment is that it’s a sign of failure. But
Bostrom points to neglected counterexamples, such as retirement and the
absence of child labor. Reframing technological unemployment in this
light makes it appear less disturbing. Just as someone in 1800 might
have struggled to imagine the leisure enjoyed by children and retirees
today, we may have difficulty envisioning a future of mass leisure.
If things go well, income from capital and land could provide luxury for
all.
Bostrom notes that it’s unclear whether automating most, but not all,
human labor will increase or decrease wages. The dramatic changes from
mass unemployment might occur much later than the automation of most
current job tasks.
Post-Instrumentality Purpose
Many challenges that motivate human action can, in principle, be
automated. Given enough time, machines could outperform humans in these
tasks.
What will be left for humans to care about accomplishing? To a first
approximation, such automation eliminates what currently gives our
lives purpose.
Bostrom calls this the age of post-instrumentality.
Much of the book describes how social interactions could provide
adequate sources of purpose.
He briefly mentions that some Eastern cultures discourage attachment to
purpose, which seems like a stronger argument than his main points.
It’s unclear why he treats this as a minor detail.
Bostrom asks his question about people pretty close to him, leftist
academics in rich Western societies. As Robin Hanson puts it:
If I live millions of years, I expect that I’ll experience large
changes in how I feel about having a purpose to guide my life.
Bostrom appears too focused on satisfying the values reflecting the
current culture of those debating utopia and AI. These values mostly
represent adaptations to recent
conditions.
The patterns of cultural and value changes suggest we’re far from
achieving a stable form of culture that will satisfy most people.
Bostrom seems to target critics whose arguments often amount to proof by
failure of imagination. Their true objections might be:
Arrogant beliefs that their culture has found the One True Moral
System, so any culture adapting to drastically different conditions
will be unethical.
Fear of change: critics belonging to an elite that knows how to
succeed under current conditions may be unable to predict whether
they’ll retain their elite status in a utopia.
The book also dedicates many pages to interestingness, asking whether
long lifespans and astronomical population sizes will exhaust
opportunities to be interesting. This mostly convinced me that I’m
confused about what kind of interestingness I value.
Malthus
A large cloud on the distant horizon is the pressure to increase
population to the point where per capita wealth is driven back down to
non-utopian levels.
We can solve this by … um, waiting for his next book to explain how?
Or by cooperating? Why did he bring up Malthus and then leave us with
too little analysis to guess whether there’s a good answer?
To be clear, I don’t consider Malthus to provide a strong
argument
against utopia. My main complaint is that Bostrom leaves readers
confused as to how uncomfortable the relevant trade-offs will be.
Style
The book’s style sometimes seems more novel than the substance. Bostrom
is the wrong person to pioneer innovation in writing styles.
The substance is valuable enough to deserve a wider audience. Parts of
the book attempt to appeal to a broad readership, but the core content
is mostly written in a style aimed at professional philosophers.
Nearly all readers will find the book too long. The sections (chapters?)
titled Tuesday and Wednesday contain the most valuable ideas, so maybe
read just those.
Concluding Thoughts
Bostrom offers little reassurance that we can safely navigate to such a
utopia. However, it’s challenging to steer in that direction if we only
focus on dystopias to avoid. A compelling vision of a heavenly distant
future could help us balance risks and rewards. While Bostrom provides
an intellectual vision that should encourage us, it falls short
emotionally.
Bostrom’s utopia is technically feasible. Are we wise enough to create
it? Bostrom has no answer.
Many readers will reject the book because it relies on technology too
far from what we’re familiar with. I don’t expect those readers to say
much beyond “I can’t imagine …”. I have little respect for such
reactions.
A variety of other readers will object to Bostrom’s intuitions about
what humans will want. These are the important objections to consider.
I’ll rate this book 3.5 on the nonstandard scale used for this
Manifold bounty.
P.S. While writing this review, I learned that Bostrom’s Future of
Humanity Institute has shut down for unclear reasons, seemingly related
to friction with Oxford’s philosophy department. This deserves further
discussion, but I’m unsure what to say. The book’s strangely confusing
ending, where a fictional dean unexpectedly halts Bostrom’s talk,
appears to reference this situation, but the message is too cryptic for
me to decipher.