This is a very relevant post for me because I’ve been asking these questions in one form or another for several months. A framework of objective value (FOV) seems to be precluded by physical materialism. However, without it, I cannot see any coherent difference between being happy (or satisfied) because of what is going on in a simulation and because of what is going on in reality. Since value (that is, our personal, subjective value) isn’t tied to any actual objective good in the universe, it doesn’t matter to our subjective fulfillment whether the universe is modified to be ‘better’ (with respect to our point of view), whether a simulation we’re in is modified to be better, or whether our preferences themselves are modified.
For example, I asked the question several weeks ago here.
When I began to complain (at length...) that without an FOV I felt like I was trapped in a machine, carrying out instructions to satisfy preferences I neither care about nor am able to abort, it was recommended that I replace my preference for objective value with a preference for subjective value.
If it is true that the only solution to my problem with the non-existence of an FOV is to change my preference—and I’ve already understood that the logical consequence of this is that any kind of preference fulfillment is equivalent to wire-heading—then I’m simply not going to be very sympathetic to objections to wire-heading based on having preferences for not being wire-headed. It’s simply not coherent; there’s no difference.
Yes, there is.
The desire to be alive, to live in the real universe, and to continue having the same preferences/values is not at all like the desire to feel like our desires have been fulfilled. Our desires are patterns encoded within our brains that correspond to a (hopefully) possible state of reality. If we were to take the two desires/patterns described above and transform them into two strings of bits, the two strings would not be equal. There is an objective difference between them, just as there is an objective difference between Windows and Mac OS.
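A trivial sketch of this point in Python, with made-up strings standing in for the two desire-patterns (nothing here pretends to encode an actual brain state):

```python
# Toy illustration: two distinct desires, written out and encoded, are
# objectively different bit patterns. The wording of both strings is my
# own stand-in for the desires described above, not a real encoding.
desire_real = "be alive, in the real universe, with my current values"
desire_felt = "feel as though my desires have been fulfilled"

bits_real = desire_real.encode("utf-8")
bits_felt = desire_felt.encode("utf-8")

print(bits_real == bits_felt)  # False: an objective difference between them
```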
You seem to believe that because desires are something that can only exist inside a mind, desires can only be about the state of one’s mind. This is false; desires can be about all of reality, of which the state of one’s mind is only a very small part.
I don’t believe this, but I was concerned I would be interpreted this way.
I can have a subjective desire that a cup be objectively filled. I fill it with water, and my desire is objectively satisfied.
The problem I’m describing is that filling the cup is a terminal value with no objective value. I’m not going to drink it, I’m not going to admire how beautiful it is, I just want it filled because that is my desire.
I think that’s useless. Since all the “goodness” is in my subjective preference, I might as well desire that an imaginary cup be filled, or write a story in which an imaginary cup is filled. (You may have trouble relating to filling a cup for no reason as a terminal value, but that makes it a good example, because terminal values are all equally objectively useless.)
But let’s consider the example of saving a person from drowning. I understand that the typical preference is to actually save a person from drowning. However, my point is that if I am forced to acknowledge that there is no objective value in saving the person from drowning, then I must admit that my preference to save a person from drowning-actually is no better than a preference to save a person from drowning-virtually. It happens that I have the former preference, but I’m afraid it is incoherent.
The preference to really save a drowning person rather than virtually is better for the person who is drowning.
Of course, best would be for no one to need to be saved from drowning; then you could indulge an interest in virtually saving drowning people for fun as much as you liked without leaving anyone to really drown.
Actually, most games involve virtually killing, rather than virtually saving. I think that says something...
In most of those games, the people you are killing are endangering someone. There are some games where you play a bad guy, but in the majority you’re some sort of protector.
Caring about what’s right might be as arbitrary (in some objective sense) as caring about what’s prime, but we do actually happen to care about what’s right.
It’s better, because it’s what your preference actually is. There’s nothing incoherent about having the preferences you have. In the end, we value some things just because we value them. An alien with different morality and different preferences might see the things we value as completely random. But they matter to us, because they matter to us.
There is one way that I know of to handle this; I don’t know if you’ll find it satisfactory or not, but it’s the best I’ve found so far. You can go slightly meta and evaluate desires as means instead of as ends, and ask which desires are most useful to have.
Of course, this raises the question “Useful for what?” Well, one thing desires can be useful for is fulfilling other desires. If I desire that people don’t drown, and that desire causes me to save people from drowning so they can go on to fulfill whatever desires they happen to have, then my desire that people don’t drown is a useful means for fulfilling other desires. Wanting to stop fake drownings isn’t as useful a desire as wanting to stop actual drownings. And there does seem to be a more-or-less natural reference point against which to evaluate a set of desires: the set of all other desires that actually exist in the real world.
As luck would have it, this method of evaluating desires tends to work tolerably well. For example, the desire held by Clippy, the paperclip maximizer, to maximize the number of paperclips in the universe, doesn’t hold up very well under this standard; relatively few desires that actually exist get fulfilled by maximizing paperclips. A desire to make only the number of paperclips that other people want is a much better desire.
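As a rough sketch of this evaluation method (the model, names, and numbers below are my own toy assumptions, not anything from the thread), one could score a desire by how many of the other desires that actually exist get fulfilled when it is acted on:

```python
from typing import Callable, Dict, List

# A "world" is just a bag of facts; a desire is a predicate over worlds.
World = Dict[str, int]
Desire = Callable[[World], bool]

def score_desire(act: Callable[[World], World],
                 other_desires: List[Desire],
                 world: World) -> int:
    """Count how many of the other existing desires are fulfilled in the
    world that results from acting on the desire under evaluation."""
    new_world = act(world)
    return sum(d(new_world) for d in other_desires)

world = {"paperclips": 0, "people_alive": 100, "paperclips_wanted": 10}

# Clippy converts everything, people included, into paperclips.
maximize_clips = lambda w: {**w, "paperclips": 10**9, "people_alive": 0}
# The tamer desire: make only as many paperclips as people actually want.
make_wanted_clips = lambda w: {**w, "paperclips": w["paperclips_wanted"]}

others: List[Desire] = [
    lambda w: w["people_alive"] > 0,                      # people want to live
    lambda w: w["paperclips"] >= w["paperclips_wanted"],  # some clips are wanted
]

print(score_desire(maximize_clips, others, world))     # 1
print(score_desire(make_wanted_clips, others, world))  # 2: the better desire
```

On this toy scoring, Clippy’s desire fulfills only one of the other existing desires, while the make-only-what-is-wanted desire fulfills both, matching the verdict above.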
(I hope that made sense.)
It does make sense. However, what would you make of the objection that it is semi-realist? A first-order realist position would claim that what is desired has objective value, while this represents the more subtle belief that the fulfillment of desire has objective value. I do agree—it is very close to my own original realist position about value. I reasoned that there would be objective (real rather than illusory) value in the fulfillment of the desires of any sentient/valuing being, as some kind of property of their valuing.
Maybe just have a rule (sketched below) that says:
Fulfill preferences when possible.
Change preferences when they are impossible to fulfill.
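A minimal sketch of that two-clause rule; the function and the example preferences are hypothetical, just to pin down the semantics:

```python
def apply_rule(preference: str, achievable: set, revised: str) -> str:
    """Fulfill the preference if the world allows it; otherwise change
    the preference itself (here, by swapping in a revised one)."""
    if preference in achievable:
        return preference  # clause 1: fulfill it as-is
    return revised         # clause 2: impossible to fulfill, so change it

achievable = {"fill the cup", "save the drowning swimmer"}

print(apply_rule("fill the cup", achievable, "leave the cup empty"))
# -> "fill the cup" (fulfilled unchanged)
print(apply_rule("find objective value", achievable, "value things subjectively"))
# -> "value things subjectively" (the preference is revised instead)
```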
“The strength to change what I can, the ability to accept what I can’t, and the wisdom to tell the difference?”
Personally, I prefer the Calvin and Hobbes version: the strength to change what I can, the inability to accept what I can’t, and the incapacity to tell the difference. ;)