As far as I understand Eliezer’s metaethics, I would say that it is compatible with deontology. It even presupposes it a little bit, since the psychological unity of mankind can be seen as a very general set of deontologies. I would thus agree that deontology is what human instincts are based on.
Under my further elaboration of said metaethics, that is, the view of morality as common computations + local patches, deontology and consequentialism are not really opposing theories. In the evolution of a species, morality would be formed as common computations passed genetically between generations, forming not so much a set of “I must” as a subtler context of presuppositions. But as the species evolves and becomes more and more intelligent, it faces newer and newer challenges, often at a speed that doesn’t allow genetic filtering and propagation. In that case, it seems to me that consequentialism is the only applicable way to find new optimal solutions, sometimes even at odds with older instincts.
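To make the “common computations + local patches” picture concrete, here is a minimal toy sketch in Python (every rule, situation, action, and value in it is a hypothetical illustration, not anyone’s actual theory): inherited deontological rules are consulted first, and a consequentialist search over predicted outcomes serves as the fallback patch for situations the rules were never shaped to cover.

```python
# Toy model of "common computations + local patches".
# All rules, situations, and values below are hypothetical illustrations.

INHERITED_RULES = {
    # Deontological "presuppositions" passed down genetically.
    "child_drowning_nearby": "rescue",
    "asked_direct_question": "answer_honestly",
}

def shared_value(outcome):
    """Stand-in for the common human valuation of world-states."""
    return outcome.get("wellbeing", 0)

def decide(situation, actions, predict_outcome):
    # 1. Fast path: an inherited instinct already covers this case.
    if situation in INHERITED_RULES:
        return INHERITED_RULES[situation]
    # 2. Novel situation, no instinct applies: fall back to a
    #    consequentialist search over predicted outcomes (the "patch").
    return max(actions,
               key=lambda a: shared_value(predict_outcome(situation, a)))

# A situation evolution never filtered for: no rule fires, so the
# consequentialist fallback picks the action with the best outcome.
predict = lambda s, a: {"wellbeing": 10 if a == "donate_online" else 2}
print(decide("viral_charity_appeal", ["donate_online", "ignore"], predict))
# -> "donate_online"
```

The contested cases, like Case A below, are exactly those where the two steps would pull in opposite directions.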
“Optimal” by what value? Since we don’t have an objective morality here, a person only has their Wants (whether moral or not) to decide what counts as optimal. This leads to problems. Consider a hypothetical Case A.
In Case A there are several options. One option would be the best from a consequentialist perspective, taking all consequences into account. However, taking this option would not only make the person taking it feel very guilty (for whatever reason; there are plenty of possibilities) but also harm their selfish interests in the long run.
This is an extreme case, but it shows the problem at its worst. Eliezer would say that doing the consequentialist thing would be the Right thing to do. However, he can have no compelling reason to do it based on his grounds for morality: an innate desire to act that way is the only reason he has for it.
Well, I intended it in the minimal sense of “maximizing the objective of an optimization problem”, if the moral quandary can be seen that way. I was not asserting that consequentialism is the optimal way to find a solution to a moral problem; I stated that it seems to me that consequentialism is the only way to find an optimal solution to a moral problem that our previous morality cannot cover.
Since we don’t have an objective morality here, a person only has their Wants (whether moral or not) to decide what counts as optimal.
But we do have an objective morality (in Eliezer’s metaethics): it’s morality! As far as I can understand, he states that morality is the computation, common to all humans, that assigns values to states of the world around us. I believe that he asserts these two things, among others:
- morality is objective in the sense that it’s a common fundamental computation, shared by all humans;
- even if we encountered an alien way of assigning value to states of the world (e.g. the pebblesorters’), we could not call that morality, because we cannot step outside our own moral system; we would have to call it something else, and it would not be understandable in moral terms.
That is: human value computation → morality; pebblesorters’ value computation → primality, which is not moral, fair, just, etc.
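A hypothetical sketch of that mapping, with every detail invented for illustration: the two computations share a type, a function from world-states to values, but “morality” rigidly names the human one, and the pebblesorters’ function earns a different name however similar its shape.

```python
# Toy illustration (all details hypothetical): two value computations
# share a type signature, world-state -> number, but differ in content.
# On this reading of the metaethics, "morality" names the human one.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def human_value(world):
    # Crude stand-in for the shared human computation: "morality".
    return world.get("happiness", 0) + world.get("fairness", 0)

def pebblesorter_value(world):
    # The pebblesorters' computation: "primality". Heaps of prime
    # size count as correct; nothing here is fair, just, or moral.
    return sum(1 for heap in world.get("heaps", []) if is_prime(heap))

world = {"happiness": 3, "fairness": 1, "heaps": [2, 3, 38]}
print(human_value(world))         # -> 4
print(pebblesorter_value(world))  # -> 2 (the heaps of 2 and 3 pebbles)
```

The point of the toy is only that sharing a type signature does not make the two functions interchangeable referents of “moral”.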
One option would be the best from a consequentialist perspective, taking all consequences into account. However, taking this option would not only make the person taking it feel very guilty (for whatever reason; there are plenty of possibilities) but also harm their selfish interests in the long run.
I agree that a direct conflict between a deontological computation and a consequentialist one cannot be resolved normatively by metaethics; at least, not by the one expounded here or the one I subscribe to. However, I believe that it doesn’t need to be: it’s true that morality, when confronted with truly alien value computations like primality or clipping, looks rather monolithic; zoomed in, however, it can be rather muddled. I would say that in any situation where there’s such a conflict, only the individual computation present in the actor’s mind can determine the outcome. If you like, computational metaethics is descriptive, and maybe predictive, rather than prescriptive.