Concerning preferences, what else is part of morality besides preferences?
A “source of normativity” is just anything that can justify a should or ought statement. The uncontroversial example is that goals/desires/preferences can justify hypothetical ought statements (hypothetical imperatives). So Eliezer is on solid footing there.
What is debated is whether anything else can justify should or ought statements. Can categorical imperatives justify ought statements? Can divine commands do so? Can non-natural moral facts? Can intrinsic value? And if so, why is it that these things are sources of normativity but not, say, facts about which arrangements of marbles resemble Penelope Cruz when viewed from afar?
My own position is that only goals/desires/preferences provide normativity, because the other proposed sources of normativity either don’t provide normativity or don’t exist. But if Eliezer thinks that something besides goals/desires/preferences can provide normativity, I’d like to know what that is.
I’ll do some reading and see if I can figure out what your last paragraph means; thanks for the link.
Concerning preferences, what else is part of morality besides preferences?
“Preference” is used interchangeably with “morality” in a lot of discussion, but here Adam referred to an aspect of preference/morality where you care about what other people care about, and stated that you care about that, but about other things as well.
What is debated is whether anything else can justify should or ought statements. Can categorical imperatives justify ought statements? Can divine commands do so? Can non-natural moral facts? Can intrinsic value? And if so, why is it that these things are sources of normativity but not, say, facts about which arrangements of marbles resemble Penelope Cruz when viewed from afar?
I don’t think introducing categories like this is helpful. There are moral arguments that move you, and there is a framework, which we term “morality”, that responds to the right moral arguments, the ones that should move you. The arguments are allowed to be anything (before you test them against the framework), and real humans clearly fail to be ideal implementations of the framework.
(Here, the focus is on acceptance/rejection of moral arguments; decision theory would have you generate these yourself in the way they should be considered, or even self-improve these concepts out of the system if that will make it better.)
“Preference” is used interchangeably with “morality” in a lot of discussion, but here Adam referred to an aspect of preference/morality where you care about what other people care about, and stated that you care about that, but about other things as well.
Oh, right, but it’s still all preferences. I can have a preference to fulfill others’ preferences, and I can have preferences for other things, too. Is that what you’re saying?
It seems to me that the method of reflective equilibrium has a partial role in Eliezer’s meta-ethical thought, but that’s another thing I’m not clear on. The meta-ethics sequence is something like 300 pages long and very dense and I can’t keep it all in my head at the same time. I have serious reservations about reflective equilibrium (à la Brandt, Stich, and others). Do you have any thoughts on the role of reflective equilibrium in Eliezer’s meta-ethics?
Oh, right, but it’s still all preferences. I can have a preference to fulfill others’ preferences, and I can have preferences for other things, too. Is that what you’re saying?
Possibly, but you’ve said that opaquely enough that I can imagine you intending a meaning I’d disagree with. For example, you refer to “other preferences”, while there is only one morality (preference) in the context of any given decision problem (agent), and the way you care about other agents doesn’t necessarily reference their “preference” in the same sense we are talking about our agent’s preference.
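The structural point here can be sketched in code. This is my own toy analogy, not anything from the thread: each agent has exactly one top-level utility function, and a “preference to fulfill others’ preferences” is just one term inside it. Crucially, that term uses the agent’s model of the other agent, which need not match the other agent’s actual utility function. All names here (`alice_utility`, `alice_model_of_bob`, etc.) are hypothetical illustrations.

```python
# Toy sketch: one morality/preference per agent, with an other-regarding
# term that references a *model* of the other agent, not their actual
# utility function.

def alice_model_of_bob(world):
    # Alice's internal stand-in for what Bob wants; it may be wrong.
    return world.get("bob_has_cake", 0)

def bob_actual_utility(world):
    # What Bob in fact cares about; Alice never consults this directly.
    return world.get("bob_has_pie", 0)

def alice_utility(world):
    # Alice's single top-level preference: a weighted mix of a
    # self-regarding term and an other-regarding term.
    own_term = world.get("alice_has_tea", 0)
    other_term = alice_model_of_bob(world)
    return 0.7 * own_term + 0.3 * other_term

world = {"alice_has_tea": 1, "bob_has_pie": 1, "bob_has_cake": 0}
print(alice_utility(world))
print(bob_actual_utility(world))
```

In this sketch Bob actually gets what he wants (pie), yet Alice’s other-regarding term scores the outcome at zero, because her model of Bob tracks cake. That is one way to read the point above: caring about other agents is a term in your one preference, and it doesn’t necessarily reference their preference in the same sense.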
It seems to me that the method of reflective equilibrium has a partial role in Eliezer’s meta-ethical thought, but that’s another thing I’m not clear on.
This is reflected in the ideas of morality being an abstract computation (something you won’t see a final answer to), and the need for morality being found on a sufficiently meta level, so that the particular baggage of contemporary beliefs doesn’t distort the picture. You don’t want to revise the beliefs about morality yourself, because you might do it in a human way instead of in the right way.
I’ll do some reading and see if I can figure out what your last paragraph means; thanks for the link.
Ah, have you not actually read through the whole sequence yet? I don’t recommend reading it out of order, and I do recommend reading the whole thing. Mainly because some people in this thread (and elsewhere) are giving completely wrong summaries of it, so you would probably get a much clearer picture of it from the original source.
I’ve read the series all the way through, twice, but large parts of it didn’t make sense to me. By reading the linked post again, I’m hoping to combine what you’ve said with what it says and come to some understanding.
Concerning preferences, what else is part of morality besides preferences?
“Inseparably Right” discusses that a bit, though again, I don’t recommend reading it out of order.
What is debated is whether anything else can justify should or ought statements. Can categorical imperatives justify ought statements? Can divine commands do so? Can non-natural moral facts? Can intrinsic value? And if so, why is it that these things are sources of normativity but not, say, facts about which arrangements of marbles resemble Penelope Cruz when viewed from afar?
These stand out to me as wrong questions. I think the sequence mostly succeeded at dissolving them for me; “Invisible Frameworks” is probably the most focused discussion of that.
I do take some comfort in the fact that, at least at this point, even pros like Robin Hanson and Toby Ord couldn’t make sense of what Eliezer was arguing, even after several rounds of back-and-forth between them.
Thanks for this!
But I’ll keep trying.