I understand it as: “If you had knowledge of all the facts in the universe, unlimited intelligence and rationality to see all the connections, and the same moral feelings you have now… after reflection, what would you consider right?”
This can be attacked as:
a) Speaking about “the same moral feelings” of a person with universal knowledge and unlimited intelligence may simply not make sense; the notion could be incoherent for some reason. I am not sure how specifically; I am just leaving it here as an option.
b) The result of the reflection is ambiguously defined; for example, it may depend strongly on the order in which conflicts are resolved. If some values are in mutual conflict, there are multiple ways to choose an internally consistent subset, and it may not be obvious which subset best fits the original set. (This is also why different humans would choose different subsets.)
c) Different humans could get very different results, because their small initial differences could be hugely amplified by the process of finding a reflective equilibrium. Even if there were an algorithm for choosing between values A and B that is not sensitive to the order of resolving conflicts, the values may be in almost perfect balance, so that for different people a different one would win. (The toy sketch after this list makes both (b) and (c) concrete.)
In short: x-rational morality is a) ill-defined; or b) possible but ambiguous; or c) very different for different people.
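For concreteness, here is a minimal sketch of what I mean in (b) and (c). It is purely hypothetical: the weighted values, the conflict pairs, and the greedy resolution rule are all invented for illustration, and bear no relation to how an actual reflection process would work. The point is only that a conflict-resolution process can be order-sensitive, and that near-balanced values make the outcome hyper-sensitive to the starting point.

```python
# A toy model of points (b) and (c). Everything here is hypothetical:
# the values, weights, conflict pairs, and the greedy "resolve" rule are
# inventions for illustration, not anyone's actual reflection procedure.

def resolve(weights, order):
    """Scan conflicting pairs in the given order; whenever both members
    of a pair are still held, drop the one with the lower weight."""
    kept = dict(weights)
    for a, b in order:
        if a in kept and b in kept:
            kept.pop(a if kept[a] < kept[b] else b)
    return sorted(kept)

# Three values in near-perfect balance; A conflicts with B, and B with C.
weights = {"A": 1.00, "B": 1.01, "C": 1.02}

# Point (b): both outcomes are internally consistent subsets (no surviving
# pair conflicts), but the order of resolving conflicts decides which
# subset we end up with.
print(resolve(weights, [("A", "B"), ("B", "C")]))  # ['C']
print(resolve(weights, [("B", "C"), ("A", "B")]))  # ['A', 'C']

# Point (c): with a fixed order, a tiny perturbation of the initial
# weights (one person's starting point vs. another's) flips the outcome.
nudged = {"A": 1.03, "B": 1.01, "C": 1.02}
print(resolve(nudged, [("A", "B"), ("B", "C")]))   # ['A', 'C']
```

Any real extrapolation procedure would be vastly more complicated, but these two failure modes, order dependence and amplified initial differences, seem hard to rule out by construction.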
It seems to me that you use a variant of the first option (and then somehow change it to the third one at the end of the article), saying more or less that a morality based on extrapolation and omniscience may feel completely immoral, or in other words that our morality objects to being extrapolated too far; that there is a contradiction between “human-like morality” and “reflectively consistent morality”.
Although I knew that Eliezer considered CEV to be an important part of his morality, I was dodging that aspect and focusing on the practical recommendations he makes. Applying CEV to the argument does not really change its substance: not only could a post-CEV A still have the same problems I describe, but a pre-CEV A could discern a post-CEV A’s conclusions well enough in a simple case and still not care.
However, your summary at the end is close enough to work with. I don’t mind treating that as “my argument” and going from there.