Very different, very adequate outcomes
Let Up be the utility function that—somehow—expresses your preferences[1]. Let Uh be the utility function that expresses your hedonistic pleasure.
Now imagine an AI is programmed to maximise U(q)=qUp+(1−q)Uh. If we vary q in the range of 5% to 95%, then we will get very different outcomes. At 5%, we will generally be hedonically satisfied, and our preferences will be followed if they don’t cause us to be unhappy. At 95%, we will accomplish any preference that doesn’t cause us huge amounts of misery.
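The blended objective can be sketched numerically. In this toy example, the candidate outcomes and their Up/Uh scores are invented purely for illustration; the point is only that maximising U(q) picks very different outcomes at q=0.05, q=0.5, and q=0.95.

```python
# Toy sketch of U(q) = q*Up + (1-q)*Uh. All outcomes and scores
# below are made up for illustration, not drawn from the post.
outcomes = {
    "ambitious projects, some misery": {"Up": 10.0, "Uh": 2.0},
    "comfortable balance":             {"Up": 6.0,  "Uh": 7.0},
    "blissful passivity":              {"Up": 1.0,  "Uh": 10.0},
}

def blended_utility(scores, q):
    """The AI's maximisation target: q*Up + (1-q)*Uh."""
    return q * scores["Up"] + (1 - q) * scores["Uh"]

def best_outcome(q):
    """Which candidate outcome maximises U(q)?"""
    return max(outcomes, key=lambda name: blended_utility(outcomes[name], q))

for q in (0.05, 0.5, 0.95):
    print(f"q = {q}: {best_outcome(q)}")
```

With these invented scores, q=0.05 selects the most hedonically pleasant outcome, q=0.95 the most preference-satisfying one, and the middle value a balance; all three score reasonably on both axes, which is the "adequate outcomes" point.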
It’s clear that, extrapolated over the whole future of the universe, these could lead to very different outcomes[2]. But—and this is the crucial point—none of these outcomes are really that bad. None of them are the disasters that could happen if we picked a random utility U. So, for all their differences, they reside in the same nebulous category of “yeah, that’s an ok outcome.” Of course, we would have preferences as to where q lies exactly, but few of us would risk the survival of the universe to yank q around within that range.
What happens when we push q towards the edges? Pushing q towards 0 seems a clear disaster: we’re happy, but none of our preferences are respected; we basically don’t matter as agents interacting with the universe any more. Pushing q towards 1 might be a disaster: we could end up always miserable, even as our preferences are fully followed. The only thing protecting us from that fate is the fact that our preferences include hedonistic pleasure; but this might not be the case in all circumstances. So moving q to the edges is risky in the way that moving around in the middle is not.
In my research agenda, I talk about adequate outcomes, given a choice of parameters, or acceptable approximations. I mean these terms in the sense of the example above: the outcomes may vary tremendously from one another, given the parameters or the approximation. Nevertheless, all the outcomes avoid disasters and are clearly better than maximising a random utility function.
[1] This is a somewhat naive form of preference utilitarianism, along the lines of “if the human chooses it, then it’s ok”. In particular, you can end up in equilibria where you are miserable, but unwilling to choose not to be (see, for example, some forms of depression).
[2] This fails to be true if preference and hedonism can be maximised independently; e.g. if we could take an effective happy pill and still follow all our preferences. I’ll focus on situations where there are true tradeoffs between preference and hedonism.