It’s been a long trip, but I think we’ve ended where I worried we would, with the One Objective Morality that all our personal moralities were imperfect reflections of.
We just aren’t “dereferencing to the same unverbalizable abstract computation”, and I think recognizing this is the first step towards making cooperative progress toward fulfilling all our individual 2-place morality functions.
From within me, my 2-place morality function naturally feels like a 1-place morality function – actions feel “just wrong”, etc.
But despite that feeling, being a conceptual creature, having conceptualized your function, and his function, and her function, and seeing that I need a 2-place function to describe them all, I can describe my own function as one of those 2-place functions as well, even though it feels like a 1-place function to me.
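To make that concrete, here’s a toy sketch of what I mean (the axes, weights, and numbers are invented purely for illustration): a 2-place morality function, curried on a particular judge, behaves exactly like a 1-place function from that judge’s point of view.

```python
from functools import partial

# Invented for illustration: a "judge" is just a weighting over Haidt-style
# axes, and an "action" is scored on the same axes.
AXES = ("fairness", "harm", "autonomy")

def morality(judge_weights, action_scores):
    """A 2-place function: the verdict depends on the judge and on the action."""
    return sum(judge_weights[a] * action_scores[a] for a in AXES)

me = {"fairness": 0.6, "harm": 0.3, "autonomy": 0.1}
you = {"fairness": 0.2, "harm": 0.3, "autonomy": 0.5}

# From the inside I never pass myself as an argument; currying the 2-place
# function on "me" yields what feels like a 1-place function.
my_morality = partial(morality, me)
your_morality = partial(morality, you)

action = {"fairness": -1.0, "harm": 0.2, "autonomy": 0.8}
print(my_morality(action), your_morality(action))  # different verdicts for the same action
```

Nothing about the felt experience of “just wrong” distinguishes my curried function from a genuinely 1-place one, which is why it takes conceptual work to notice the hidden first argument.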
So far, I think we’d be on the same page.
But instead of introducing some unspecified abstract ideal function, I’d emphasize the reality of our different 2-place functions. Your morality is not mine, exactly, and if I want to convince you, I need to do it based on your 2-place morality function. In such a discussion, the goal is a sharing of information: we try to communicate our own 2-place functions to each other, and give each other the opportunity to show how they could be better fulfilled. You show me how I could better fulfill mine, and I show you how you could better fulfill yours.
Compare that kind of conversation, with its potential to make our 2-place functions individually and mutually more consistent, to the discussions where both participants hypothesize a perfect abstract ideal truth and are enraged and befuddled that the other guy “just doesn’t get it”.
Which kind of discussion do you think is more likely to increase individual and mutual coherence? Even if there were a perfect abstract ideal waiting to be discovered, wouldn’t the first kind of discussion be the way to find it?
I don’t think Eliezer claimed there is a perfect abstract one-place function of morality somewhere. From what I understood of this Sequence, he claimed that:
1) Morality is mostly what an algorithm feels like from the inside: the application of an algorithm to data, which unfolds in different ways in many different situations. The algorithm differs from person to person; it is a 2-place function.
2) Your morality can’t come from nowhere; you can’t teach morality to a “perfect philosopher of total emptiness” nor to a rock. The foundations of morality, its bootstrapping, come from evolution and from the feelings/abilities/goals it generated. Your morality can change, but you’ll have to use your previous morality to evaluate any change to itself.
3) The algorithms of two humans are very close to each other, much closer than to the morality of pebblesorters or paper-clip optimizers. Most moral disagreements between humans come from different ways of unfolding the algorithm, due to biases, missing information, failure to use common skills like empathy, different expectations about consequences… not from differences between terminal values.
It’s hard to summarize the work of another, and to summarize so many posts in 3 simple points, so don’t hesitate to correct me if I misrepresented Eliezer’s position. But that’s how I understood it, and so far I agree with it.
1) Yes. Different between two people.
2) Yes. Your values change based on your current values. One issue I hadn’t brought up is that I believe your moral values are only some of your values, and do not solely determine your choices.
3) I don’t think the algorithms are that close. Along the lines of Jonathan Haidt’s research, I think there are different morality pattern-matching algorithms along the axes of fairness, autonomy, disgust, etc. I would guess that the algorithms for each axis are similar, but that the weighting between them is less similar, as borne out in Haidt’s work.
Also, when you say “unfolding the algorithm”, what does that mean, and which algorithm are you speaking of? My unfolding of my 2-place algorithm?
My largest issue is the implication that our 2-place functions are imperfect images of an ideal 1-place function. In some places that’s the clear implication I take, and in others it’s not. In his final summary, he explicitly says:
we are dereferencing two different pointers to the same unverbalizable abstract computation.
I think that’s just wrong. We’re using the same label, but dereferencing to different 2-place functions, mine and yours, and that’s why we’re often talking at cross purposes and don’t make much progress.
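A toy illustration of what I mean by dereferencing (the criteria are invented, nothing more): we both use the label “wrong”, but the label points into each speaker’s own function.

```python
# Invented criteria, purely for illustration.
def wrong_for_alice(action_features):
    return "betrayal" in action_features

def wrong_for_bob(action_features):
    return "disrespect" in action_features

# The shared label "wrong" dereferences to a different function per speaker.
wrong = {"alice": wrong_for_alice, "bob": wrong_for_bob}

action = {"betrayal"}                  # an action described by its features
print(wrong["alice"](action))          # True:  "that's just wrong!"
print(wrong["bob"](action))            # False: "I don't see the problem."
```

As long as we both assume the label names one shared function, the disagreement looks like the other guy failing to compute it correctly.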
Eliezer says that we end up where we started, arguing in the same way we always have. I think we should be arguing in a new way: no longer trying to bludgeon people into submission to the values of our own 2-place function, mistaking it for a universal 1-place function, but trying to understand the other guy’s 2-place function and appealing to that.
I think I disagree with you, but I’m not sure exactly what you mean by what you’re saying. It might help to answer these questions three:
Taboo “universal”. What do you mean by “universal 1-place function”?
In what sense do you think morality is a 2-place function? How is this function applied in decision making? Does that mean it would be wrong to stop people whose “morality” says torture is “good” from torturing people?
In what sense do you think this 2-place function is different between people? (I’m looking for a precise answer in terms of the first and second argument to the function here.)
But instead of introducing some unspecified abstract ideal function, I’d emphasize the reality of our different 2-place functions. Your morality is not mine, exactly, and if I want to convince you, I need to do it based on your 2-place morality function.
Humans don’t have ideal moralities stuffed in their brains, so to convince other humans (and even yourself!) you need to see what effectively affects their minds (brains). Morality is something else: it’s not a description of how brains work, it’s a statement of how the world should be.