Considerations on interaction between AI and expected value of the future

Some thoughts about the ‘default’ trajectory of civilisation and how AI will affect the likelihood of different outcomes.

Is the default non-extinction outcome utopic or dystopic?

Arguments for dystopia:

  1. The world seems dystopic in many ways now

  2. We’re getting better at various types of manipulative tactics, e.g. persuasion, marketing, creation of addiction and dependence, and the harnessing of tribalism. These tactics cause people’s actions to depart from what would best achieve their ‘true’ values. When this happens a lot, there is no reason the world should move in a direction that people think is broadly good. The things that happen in the world will no longer reflect human values; they will reflect the results of competition for resources and influence between non-human actors (e.g. corporations, political parties) that are able to effectively control human actions.

  3. Most creatures have existed in a dystopic Malthusian struggle, where the size of the population is kept stable by large numbers dying due to lack of resources. This is the state for all wild animals, and was the state for almost all of human history. We should probably consider this the default state. We should view the current state, where there are surplus resources for people to live much more comfortably than ‘barely not dying’ and the population is controlled by our reproductive choices rather than starvation or conflict, as an exception that will probably revert to normal, rather than a permanent state.

  4. Altruistic or broadly benevolent values are not the default; they’re pretty recent and rare. The fraction of human history where our moral circles have been wide enough to include humans of other races or nationalities is extremely small. Most current people’s moral circles only patchily include animals: whether people feel any empathy depends on the exact species and circumstance. It seems like in many traditional societies, killing or harming a member of a rival group was considered not just neutral but good, something virtuous and something to be celebrated.

  5. Technology is making the circumstances of our lives increasingly distant from the sort of settings humans are adapted for. We’ll find ourselves in settings where the environment is too unnatural to successfully trigger happiness, fulfilment, or empathy.

  6. There are positional goods and values that involve people benefiting from the misfortune of others, or from authority and dominion over them. Current people desire various positional goods like relative social position, relative wealth, relative dating success, etc. Historically, value systems which directly value the submission or suffering of others seem common. It seems to have been pretty standard, for most of human history, for your value system to include how many people you’ve conquered or dominated, and a key way to prove (to others or for your own satisfaction) that you’ve successfully conquered someone is to force them to do something they don’t want to do.

  7. Selection favours certain kinds of values that are not the values we’d want in a utopia: those with values of dominance, conquest and proliferation will survive and spread more effectively than those with values of pacifism, altruism and benevolence

Arguments for utopia:

  1. All else being equal, most (current, westernish?) people would prefer the world to be generally good for (at least most of) the sentient beings in it. People will generally try to shape the world in a better direction, as long as it’s not too costly to themselves to do this. Technological progress will make it increasingly cheap and easy to ensure all sentient beings have good lives

  2. Life for most people has mostly been getting better, and benevolent values have been becoming more common

  3. Most times when people have claimed civilisation is going downhill they have been wrong

  4. Under many circumstances, having values of pacifism, altruism and benevolence actually does outcompete aggressive and selfish values, because those with cooperative values are better able to coordinate, be trustworthy, and avoid fighting among themselves.

How does AI affect these considerations?

Ways AI can make things worse:

Disrupting utopia arguments:

Affects (1):

  • Disrupts ‘people want thing to happen → thing happens’, by making the world more confusing and difficult to steer, or through resource competition (AI taking resources from humans)

  • Disrupts ‘most people want x → x happens’, by enabling more concentration of power. Currently, society is steered by the aggregate of many people’s values and preferences; this aggregate is more moderate and more reliably benevolent than a random individual’s values. We might lose this property if AI enables ‘single sociopath wants thing to happen → thing happens’.

Disrupts (2+3) if AI is qualitatively different from past technological change and therefore breaks previous patterns

Strengthening dystopia arguments:

(2) AI is likely to make us much better at manipulation—it will allow more intelligently optimised, larger-scale, and more personalised targeting of persuasion and other tactics that decouple people’s actions from the things that they ‘really’ value

(4+5) AIs that are moral patients but don’t trigger empathy, or seem like moral patients but are actually not, are going to create murky and confusing ethical territory, increasing the risk of moral catastrophe.

(4+5) AI making the environment more strange and unnatural risks breaking whatever is causing people to have broadly altruistic values

(3+7) AI provides a new and faster-moving ecosystem for selection to take place in (e.g. among individual models or agents, among automated companies), which will increase the strength of this effect relative to other influences on the world’s trajectory (e.g. the fact that most people don’t want the world to be taken over by whatever corporation is most ruthless). This both increases the probability that the world will be dominated by whichever actor is most ruthless, and increases the probability that we’ll end up in a Malthusian struggle.

(7) AI capabilities increase the influence gap a group can obtain by being more ruthless. If there are more powerful tools on the table to grab, the most grabby people will outcompete others by a larger margin

Ways AI can make things better:

Strengthening utopia arguments:

AI is another technology, and as such it will enable humans to better understand and control the world. Scientific progress and economic growth resulting from AI progress will make it cheaper and easier to provide for the needs of sentient beings, and to obtain things we want without harming sentient beings.

Humans overall mostly do things that are in their interests. If we, as a society, develop and deploy an AI capability, that is evidence that the capability does in fact make the world better

Weakening dystopia arguments:

(1 + 3) If AI changes the world radically, then maybe current dystopic aspects will disappear. For example, a singleton would eliminate coordination problems, and even a widely trusted advisor would eliminate many coordination problems.

(2) As well as improving manipulation, AI tools can also increase individual people’s ability to find, process, and understand information. AI could vastly improve the quality of education, and therefore people’s judgement and thinking skills. It could improve people’s control of what content they interact with

(4) AI can reduce scarcity and competition, and improve education and availability of information, both of which are likely to increase the frequency of benevolent and altruistic values. AI can help us reflect on and refine our values.