Your post still leaves the possibility that “quality of life”, “positive emotions” or “meaningfulness” are objectively existing variables, and people differ only in their weighting. But I think the problem might be worse than that. See this old comment by Wei:
Let’s say a robot models the world as a 2D grid of cells that have intrinsic color; it always predicts that any blue cell it shoots at will turn some other color, and its utility function assigns negative utility to the existence of blue cells. What does this robot “actually want”, given that the world is not really a 2D grid of cells that have intrinsic color?
In human terms, let’s say you care about the total amount of happiness in the universe. Also let’s say, for the sake of argument, that there’s no such thing as total amount of happiness in the universe. What do you care about then?
See Eliezer’s Rescuing the utility function for a longer treatment of this topic. I spent some time mining ideas from there, but still can’t say I understand them all.
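To make the quoted thought experiment concrete, here is a minimal sketch (my own illustration; the class and function names are hypothetical and not from Wei’s comment) of an agent whose utility function is defined entirely over its internal world model:

```python
# A toy version of Wei's robot. Its utility function only refers to objects in its
# *internal* ontology -- a 2D grid of cells with intrinsic color -- even if the real
# world contains no such thing.

GRID_SIZE = 5

class RobotWorldModel:
    """The robot's (mistaken) ontology: a 2D grid of cells with intrinsic color."""
    def __init__(self):
        self.cells = [["blue" for _ in range(GRID_SIZE)] for _ in range(GRID_SIZE)]

    def predict_shoot(self, x, y):
        """The robot predicts that any blue cell it shoots at turns some other color."""
        if self.cells[y][x] == "blue":
            self.cells[y][x] = "grey"

def utility(model: RobotWorldModel) -> int:
    """Negative utility for every blue cell that exists in the modeled world."""
    return -sum(cell == "blue" for row in model.cells for cell in row)

if __name__ == "__main__":
    model = RobotWorldModel()
    print(utility(model))      # -25: a world full of blue cells is bad, by its lights
    model.predict_shoot(2, 3)
    print(utility(model))      # -24: shooting a blue cell "improves" the modeled world
```

Nothing in the sketch pins down what the robot “wants” about reality once the grid ontology is dropped, which is exactly the problem being pointed at.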
To me this looks like a knockdown argument against any non-solipsistic morality. I really do just care about my qualia.
In some sense it’s the same mistake the deontologists make, on a deeper level. A lot of their proposed rules strike me as heavily correlated with happiness. How were these rules ever generated? Whatever process generated them must have been a consequentialist process.
If deontology is just applied consequentialism, then maybe “happiness” is just applied “0x7fff5694dc58”.
Your post still leaves the possibility that “quality of life”, “positive emotions” or “meaningfulness” are objectively existing variables, and people differ only in their weighting. But I think the problem might be worse than that.
I think this makes the problem less bad, because if you get people to go up their chain of justification, they will all end up at the same point. I think that point is just predictions of the valence of their qualia.
It’s not. You may only care about your qualia, but I care about more than just my qualia. Perhaps what exactly I care about is not well-defined, but sure as shit my behavior is best modelled and explained as trying to achieve something in the world outside of my mind. Nozick’s experience machine argument shows all this. There’s also a good post by Nate Soares on the subject IIRC.