The future of values

Humans of today control everything. They can decide who gets born and what gets built. So you might think that they would basically get to decide the future. However, there are reasons to doubt this. In one way or another, resources threaten to escape our hands and land in the laps of others, fueling projects we don’t condone, in aid of values we don’t care for.

A big source of such concern is robots. The problem of getting unsupervised strangers to carry out one’s will, rather than carrying out something almost but not quite like one’s will, has eternally plagued everyone with a cent to tempt such a stranger with. There are reasons to suppose the advent of increasingly autonomous robots with potentially arbitrary goals and psychological tendencies will not improve this problem.

If we avoid being immediately trodden on by a suddenly super-superhuman AI with accidentally alien values, you might still expect that a vast new labor class of diligent geniuses with exotic priorities would snatch a bit of influence here and there, and eventually do something you didn’t want with the future you employed them to help out with.

The best scenario for human values surviving far into an era of artificial intelligence may be the brain emulation scenario. Here the robot minds start out as close replicas of human minds, naturally with the same values. But this seems bound to be short-lived. It would likely be a competitive world, with strong selection pressures. There would be the motivation and technology to muck around with the minds of existing emulations to produce more useful minds. Many changes that would make a person more useful to another person might involve altering that person’s values.

Regardless of robots, it seems humans will have more scope to change humans’ values in the future. Genetic technologies, drugs, and even simple behavioral hacks could alter values. In general, we understand ourselves better over time, and better understanding yields better control. At first it may seem that more control over the values of humans should cause values to stay more fixed. Designer babies could fall much closer to the tree than children traditionally have, so we might hope to pass our wealth and influence along to a more agreeable next generation.

However, even if parents could choose their children to perfectly match their own values, selection effects would determine who had how many children – somewhat more strongly than they do now – and humanity’s values would drift over the years. If parents also choose based on other criteria – if they decide that their children could do without their own soft spot for fudge, and would benefit from a stronger work ethic – then values could change very fast. Or genetic engineering may just produce shifts in values as a byproduct. In the past we have had a safety net: every generation is basically the same genetically, so we cannot erode what is fundamentally human about ourselves. But this safety net could be unravelled.

Even if individual humans maintain the same values, you might expect innovations in institution design to shift the balance of power between them. For instance, what was once an even fight between selfishness and altruism within you could easily be tipped by the rest of the world making things easier for the side of altruism (as they might like to do, if they were either selfish or altruistic).

Even if you have very conservative expectations about the future, you probably face qualitatively similar changes. If things continue exactly as they have for the last few thousand years, your distant descendants’ values will be as strange to you as yours would be to your own distant ancestors.

In sum, there is a general problem with the future: we seem likely to lose control of a lot of it. And while in principle some technology seems like it should help with this problem, it could also create an even tougher challenge.

These concerns have often been voiced, and seem plausible to me. But I summarize them mainly because I want to ask another question: what kinds of values are likely to lose influence in the future, and what kinds are likely to gain it? (Selfish values? Far mode values? Long term values? Biologically determined values?)

I expect there are many general predictions you could make about this. And as a critical input into what the future looks like, future values seem like an excellent thing to make predictions about. I have predictions of my own; but before I tell you mine, what are yours?
