I’ve now read your linked posts, but can’t derive from them how you would answer my questions. Do you want to take a direct shot at answering them? And also the following question/counter-argument?
Think about the consequences, what will actually happen down the line and how well your Values will actually be satisfied long-term, not just about what feels yummy in the moment.
Suppose I’m a sadist who derives a lot of pleasure/reward from torturing animals, but also my parents and everyone else in society taught me that torturing animals is wrong. According to your posts, this implies that my Values = “torturing animals has high value”, and Goodness = “don’t torture animals”, and I shouldn’t follow Goodness unless it actually lets me better satisfy my values long-term, in other words, allows me to torture more animals in the long run. Am I understanding your ideas correctly?
(Edit: It looks like @Johannes C. Mayer made a similar point under one of your previous posts.)
Assuming I am understanding you correctly, this would be a controversial position to say the least, and counter to many people’s intuitions or metaethical beliefs. I think metaethics is a hard problem, and I probably can’t easily convince you that you’re wrong. But maybe I can at least convince you that you shouldn’t be as confident in these ideas as you appear to be, nor present them to “lower-level readers” without indicating how controversial / counterintuitive-to-many the implications of your ideas are.
From the top:
Are our Values the real-world things that trigger our feelings, or the feelings themselves? (If the latter, we’ll be able to artificially trigger them at negligible cost and with no negative side effects, unlike today.)
Not quite either of those, but if we’re speaking loosely then the real-world things that trigger our feelings. Definitely not the feelings themselves.
“We Don’t Get To Choose Our Own Values” will be false, so that part will be irrelevant. How does this affect your arguments/conclusions?
It’s already false today for things like heroin; drugs already make it possible to overwrite our values if we so choose. I would reason about future opportunities to overwrite our values in much the same way I reason about heroin today (and in much the same way that I think most people reason about heroin today).
Even today, Goodness-as-memetic-egregore can (and does) heavily influence our Values, through the kind of mechanism described in Morality is Scary. (Think of the Communists who yearned for communism so much that they were willing to endure extreme hardship and even torture for it.) This seems like a crucial part of the picture that you didn’t mention, and which complicates any effort to draw conclusions from it.
Yup, I totally buy that that happens, including in more ordinary day-to-day ways. At the point where a meme has integrated itself into the feeling-triggers directly, I’m willing to say “ok this meme has become a part of this person’s actual values”. As with heroin, this is a thing which one typically wants to avoid under one’s current values, but once it’s happened there’s no particular reason to undo it (at least from the first-person perspective; obviously people try to overwrite others’ values all the time).
My own perspective is that what you call Human Values and Goodness are both potential sources (along with others) of “My Real Values”, which I’ll only be able to really figure out after doing or learning a lot more philosophy (e.g., to figure out which ones I really want to, or should, keep or discard, or how to answer questions like the above). In the meantime, my main goals are to preserve/optimize my option values and ability to eventually do/learn such philosophy, and to avoid doing anything that might turn out to be really bad according to “My Real Values” (like denying some strong short-term desire, or committing a potential moral atrocity), using something like Bostrom and Ord’s Moral Parliament model for handling moral uncertainty.
At some point, somewhere in this process, one needs to figure out what counts as evidence about value, i.e. what crosses the is-ought gap. And I would be real damn paranoid about giving a memetic egregore de-facto write access to the “ought” side of the is-ought gap.
Suppose I’m a sadist who derives a lot of pleasure/reward from torturing animals, but also my parents and everyone else in society taught me that torturing animals is wrong. According to your posts, this implies that my Values = “torturing animals has high value”, and Goodness = “don’t torture animals”, and I shouldn’t follow Goodness unless it actually lets me better satisfy my values long-term, in other words, allows me to torture more animals in the long run. Am I understanding your ideas correctly?
[...]
Assuming I am understanding you correctly, this would be a controversial position to say the least, and counter to many people’s intuitions or metaethical beliefs.
I’d flag that there are still instrumental considerations, i.e. other people assign (a lot of) negative value to animals being tortured, and I probably want to still be friends with those people, so I might want to avoid the torture for practical reasons.
That said, steelmanning: in a world where basically all humans enjoyed torturing animals, yes, those alternate-humans should-according-to-their-own-values torture lots of animals. Obviously that is controversial, but also-obviously it’s one of those things that’s controversial mostly for stupid reasons (i.e. people really want to find some reason why their own values are the One True Universal Good), not for good reasons.
I don’t know if this is johnswentworth’s intended meaning, but I read this more as “instructions to be effective” or “a discussion of how things are”, not “approval of hypothetical alternate values”.
It is true that for a person to most effectively seek their own values, they need to seek their own values rather than the values suggested by goodness. I don’t think agreeing with or discussing that sentiment should imply approval of alternate values other people might have.
If someone did value torturing animals, I would want them to seek pleasure in the simulated torture of animals, and for them to be prevented from torturing real animals, because that is part of my values, which are the ones I am trying to seek regardless of the values suggested by goodness or the animal torturer’s values.
I think “people having freedom and capability to seek their own values” is also part of my values. It is a part that makes me want people to understand the relationship between their values and the values suggested by goodness, and that really does create a contradiction in my values. But I don’t believe discussing the relationship between, or inequality of, people’s values and the values suggested by goodness should imply my values are permissive towards the animal torturer’s values.
Still, I think the implication you have pointed out is a good one to clarify. Does my clarification make sense? I prefer it to johnswentworth’s steelmanning in his reply to your comment, although I agree with his sentiment that humans should be trying to understand their own values and to negotiate and coordinate between people with different values, rather than seeking to find some objectively true values that I don’t believe exist.