But it seems like Eliezer is presupposing a kind of consequentialism of rationality, both in that article and in general with the maxim “rationalists should win!”
Seems that way. Disclaimer: IHAPMOE (I have a poor model of Eliezer).
He [no longer speaking of Eliezer] simply brainwashes himself into using his Practically Ideal Moral Code because over the long run, this will be for the best according to his initial, consequentialist values.
See, for example, my comment on why trying to maximize happiness should increase your utility more than trying to maximize your utility would. If happiness is the derivative of utility, then maximizing happiness over a finite time period maximizes the increase in utility over that period. If you repeatedly maximize your happiness over timespans that are small relative to your lifespan, then at the end of your life you'll have attained a higher utility than someone who tried to maximize utility over those same periods.
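A minimal way to spell out the step this argument relies on, under the stated assumption that happiness is the time-derivative of utility:

```latex
% Assumption (from the comment above): happiness is the derivative
% of utility, h(t) = U'(t).
% By the fundamental theorem of calculus, total happiness over an
% interval [t_0, t_1] equals the net gain in utility over it:
\[
  \int_{t_0}^{t_1} h(t)\,dt
  \;=\; \int_{t_0}^{t_1} U'(t)\,dt
  \;=\; U(t_1) - U(t_0).
\]
% Chaining many short intervals t_0 < t_1 < \dots < t_n telescopes:
\[
  \sum_{k=1}^{n} \bigl( U(t_k) - U(t_{k-1}) \bigr)
  \;=\; U(t_n) - U(t_0),
\]
% so maximizing happiness on each short interval amounts, under this
% assumption, to maximizing the total increase in utility over the
% whole span.
```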
Must satisfy a publicity condition. That is, widespread acceptance of this set of principles should be conducive to cooperation and not lead to the same self-serving abuse problem that consequentialism has.
This variant of Kant’s maxim still seems to be universally adhered to by moralists; yet it’s wrong. I know that’s a strong claim.
The problem is that everybody has different reasoning abilities. A universal moral code, of which one could demand that it satisfy the publicity condition, would have to be optimal for EY and for chimpanzees alike.
If you admit that it may be better for EY to adopt a slightly more sophisticated moral code than the chimpanzees do, then satisfying the publicity condition implies suboptimality.
Some of the conditions on one’s Practically Ideal Moral Code mean that it’s actually not the case that everyone should use the same one. The publicity condition is a sort of “ceteris paribus, if everyone were just as well-suited to the use of this code as you and used it, would that be okay?” You could replace this formulation of the condition with something like “if everyone did things mostly like the ones I would do under this code, would that be okay?”
That’s a more reasonable position, but I think it may be better to view public morality as an ecosystem. It provides more utility to have different agents occupy different niches, even if they have equal abilities. It may have high utility for most people to eschew a particular behavior, yet society may require some people to engage in it. Having multiple moral codes allows this.
that identify a situation or class of situations and call for an action in that/those situation(s).
You don’t need multiple moral codes, you just need to identify in a single moral code the situations under which it’s appropriate to perform that generally-eschewed action.
Doesn’t the publicity condition allow you to make statements like “If you have the skills to do A, then do A; otherwise do B”? Similarly, to handle the case where everyone is just like you, a code can alter itself in the very case that publicity cares about: “If X percent of agents are using this code, do Y; otherwise do Z.” It seems sensible to alter your behavior in both cases, even if it feels like dodging the condition.
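The point that a single conditional code can subsume multiple codes can be sketched in code. This is purely illustrative; all names and actions below are hypothetical stand-ins, not anything from the discussion:

```python
# Hypothetical sketch: one public moral code with conditional clauses
# reproduces the behavior of several distinct codes. The specific
# skills and actions are invented for illustration.

def unified_code(agent_skills: set, fraction_using_code: float) -> str:
    """Return the action a single, public code recommends."""
    # Clause 1: "If you have the skills to do A, then do A; otherwise do B."
    if "surgery" in agent_skills:
        action = "perform the operation"   # stands in for A
    else:
        action = "call a surgeon"          # stands in for B
    # Clause 2: "If X percent of agents are using this code, do Y;
    # otherwise do Z."
    if fraction_using_code >= 0.5:
        action += " and publicize the code"    # stands in for Y
    else:
        action += " and recruit adherents"     # stands in for Z
    return action

# Agents with different abilities, in different social conditions,
# get different recommendations from the same code:
print(unified_code({"surgery"}, 0.9))  # skilled agent, code widespread
print(unified_code(set(), 0.1))        # unskilled agent, code rare
```

The design point is that the branching lives inside the code itself, so it can be made fully public without requiring every agent to act identically.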