Why do we find it natural or attractive to simplify our moral intuitions?
I’ll go with the Hansonian answer: we keep our old and complex systems of reasons-for-actions, but verbally endorse simple moral frameworks because they make it easier to argue against enemies or make allies. I don’t believe people who profess to have adopted some moral system in earnest, because all simple moral systems recommend very extreme behavior when followed to their logical conclusions.
I like Nesov’s idea of ditching the abstract phlogiston of “rightness” that doesn’t have much causal or explanatory power anyway, and thinking only about concrete and varied reasons-for-actions instead. Accepting this view (ignoring abstract moral intuitions that have no motive power) might even make things easier for CEV.
It seems to me that when someone adopts a moral system, they may not follow all of its conclusions, but their moral intuitions as well as actual actions do shift toward the recommendations of that system. Do you disagree?
I agree with that, but feel that they don’t shift by very much. And when they do shift, the causality might well run in the other direction: sometimes we change our professed morality to justify our preferred actions. And most of our actions are caused by reasons other than our current professed morality anyway, so it’s not likely to play a large role in the preferences that CEV will infer from us.
If we consider a human as a group of agents with different values, we could say that the conscious self’s values are greatly shifted when adopting a moral system, but its power is limited, because most of the human’s actions are not under its direct control. For example, someone might eat too much and gain weight as a result, even if that is against their conscious desires. Depending on technological advances, that power balance could change, say if someone came up with a pill that lets you control your appetite.
FAI essentially lets the conscious self have total dominance, if it chooses to. Why should CEV weigh its values according to the balance of power as of 2011?
If we consider a human as a group of agents with different values
Things like this are why it looks like a good idea to me to taboo “values”. A human includes many heuristics that together add up to what counts as an “agent”. Separate aspects/parts of a human include fewer heuristics, which makes those parts less like agents, and “values” for those parts become even less well-defined than for the whole.
So “human as group of agents with different values” translates as “human as a collection of parts with different structure”, which sounds far less explanatory (as it should).
I agree that sometimes it can be useful to taboo “values”. But I’m not sure why it would be helpful to taboo it here. I could rephrase my comment as saying that the subset of heuristics that corresponds to the conscious self, after adopting a new moral system, would cause a large shift in actions if it could (i.e., was given tools to overpower other conflicting heuristics), so it’s not clear that adopting new moral systems should or would have little effect on CEV. Does tabooing “values” bring any new insights to this discussion?
Does tabooing “values” bring any new insights to this discussion?
Probably not, but it lifts the illusion of understanding, which is what tabooing is all about. It’s good practice to avoid unnecessary imprecision or seemingly harmless equivocation.
(Also, I’d include all the heuristics into “conscious self”, not just some of them. They all have a hand in forming conscious decisions, and inability to know or precisely alter the workings of particular heuristics similarly applies to all of them. At least, the same criteria that exclude some of the heuristics from your conscious self should allow including external tools in it.)
When someone verbally endorses a given framework, I understand it as saying “This is the framework that best fits my intuitions”, but understand there are likely some points that diverge.
But maybe I am wrong and most people have actually realigned all their intuitions/behavior once they have picked a system?
I believe that abstract moral intuitions do have motive power.
For example, I have never been a very good utilitarian because I am selfish, lazy, etc. However, if one year ago you had offered me the option of becoming a very good preference utilitarian, in an abstract context where my reflexes didn’t kick in, I would have accepted it. If you had given me the option to implement an AI which was a preference utilitarian (in some as-yet-never-made-sufficiently-concrete sense which seemed reasonable to me), I would have taken it.
I also am not sure what particular extreme behavior preference utilitarianism endorses when followed to its logical conclusion. I’m not aware of any extreme consequences which I hadn’t accepted (of course, I was protected from unwanted extreme consequences by the wiggle room of a grossly under-determined theory).