suddenly you have a seizure during which you vividly recall all of those history lessons where you learned about the horrible things people do when they feel justified in being blatantly evil because of some abstract moral theory
People often mistakenly think they are above average at tasks and skills such as driving. This has implications for people who believe themselves above average, without changing how well the people who actually are above average at driving can drive.
Humans often mistakenly think they face trolley problems when they really don’t. This has implications for people who believe they face a trolley problem, without directly changing what constitutes a good response by someone who actually faces a trolley problem.
If I expect that these 3^^^3+1 people are mostly wrong about morality and would not reflectively endorse their implicit preferences being used in this decision instead of my explicitly reasoned and reflected-upon preferences, then I should just go with mine.
If your decision depends on referencing people’s hypothetical reflectively endorsed morality, then you are not simply going with your preferences about morality, divorced from the moral systems of the many people in question. Your original thought process was about the morality of the act independent of those people’s preferences, and it determined one choice was right. Having checked others’ reflective morality, it’s in an important sense a coincidence that you conclude that the same act is the right one. You are performing a new calculation (that it is largely composed of the old one is irrelevant), and so should not say you are “just” going with “[your]” preferences.
That you are ignoring people’s stated preferences in both calculations (which, remember, have the same conclusion) is similarly irrelevant. In the second but not the first you weigh people’s reflective morality, so despite other (conspicuous) similarities between the calculations, there was no going back to the original calculation in reaching the second conclusion.
I am knowingly, arrogantly, blatantly disregarding the current preferences of 3^^^3 currently-alive-and-not-just-hypothetical people in doing so and thus causing negative utility many, many, many times more severe than the 3^^^3 units of negative utility I was trying to avert.
If in your hypothetical they are informed and this hurts them, they’re getting more than a speck’s worth, eh?
I may be willing to accept this sacrifice
You’re willing to accept the sacrifice of others having negative utility?
blatantly evil with respect to their current preferences
It’s OK to admit an element of an action was bad—it’s not really an admission of a flaw. We can celebrate the event of the death of Bin Laden without saying it’s good that a helicopter crashed in the operation to get him. We can celebrate the event of his death without saying that one great element of his death was that he felt some pain. We can still love life and be happy for all Americans, and especially for the SEAL who shot him, that he got to experience what really has to feel FANTASTIC, without saying that it’s, all else equal, good that OBL is dead rather than alive. All else is not equal, as we are overall much better off without him.
The right choice will have some negative consequences, but to say it is partly evil is misleadingly calling attention to an irrelevancy, if it isn’t an outright misuse of “evil”.
Bonus problem 1: Taking trolleys seriously
Or to test the subject to see if he or she is trustworthy...or for reasons I can’t think of.
Humans often mistakenly think they face trolley problems when they really don’t. This has implications for people who believe they face a trolley problem, without directly changing what constitutes a good response by someone who actually faces a trolley problem.
If your decision depends on referencing people’s hypothetical reflectively endorsed morality, then you are not simply going with your preferences about morality, divorced from the moral systems of the many people in question.
Yeah, so this gets a little tricky, because the decision forks depending on whether you think most people would themselves care about their future smarter selves’ values, or whether you think they don’t care but are wrong for not caring. (The meta levels are really blending here. That’s a theme I didn’t want to avoid, but unfortunately I don’t think I came up with an elegant way to acknowledge its importance while keeping the spirit of the post, which is more about noticing confusion and pointing out lots of potential threads of inquiry than it is an analysis; a real analysis would take a ton of analytic philosophy, I think.)
That you are ignoring people’s stated preferences in both calculations (which, remember, have the same conclusion) is similarly irrelevant. In the second but not the first you weigh people’s reflective morality, so despite other (conspicuous) similarities between the calculations, there was no going back to the original calculation in reaching the second conclusion.
Ah, I was trying to hint at three main branches of calculation; perhaps I will add an extra sentence to delineate the second one more. The first is the original “go with whatever my moral intuitions say”, the second is “go with whatever everyone’s moral intuitions say, magically averaged”, and the third is “go with what I think everyone would upon reflection think is right, taking into account their current intuitions as evidence but not as themselves the source of justifiedness”. The third and the first are meant to look conspicuously like each other, but I didn’t mean to mislead folk into thinking the third explicitly used the first calculation. The conspicuous similarity stems from the fact that the actual process you would go through to reach the first and the third positions is probably the same.
The right choice will have some negative consequences, but to say it is partly evil is misleadingly calling attention to an irrelevancy, if it isn’t an outright misuse of “evil”
I used some rhetoric, like using the word ‘evil’ and not rounding 3^^^3+1 to just 3^^^3, to highlight how the people whose fate you’re choosing might perceive both the problem and how you’re thinking about the problem. It’s just… I have a similar reaction when thinking about a human self-righteously proclaiming ‘Kill them all, God will know His own.’, but I feel like it’s useful that a part of me always kicks in and says I’m probably doing the same damn thing in ways that are just less obvious. But maybe it is not useful.
I’m having trouble inferring your point here… The contrast between ‘those who are dreaming think they are awake, but those who are awake know they are awake’ and “I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of people than human beings” is always on the edges of every moral calculation, and especially every one that actually matters. (I guess it might sound like I’m suggesting reveling in doubt, but noticing confusion is always so that we can eventually become confused on a higher level and about more important things. Once you notice a confusion, you get to use curiosity!)
I have an uncommon relationship with the dream world, as I remember many dreams every night. I often dream within a dream; I might do this more often than most because dreams occupy a larger portion of my thoughts than they do for others, or I might just be remembering those dreams more than most do. When I wake up within a dream, I often think I am awake. On the other hand, sometimes in the middle of dreams I know I am dreaming. Usually it’s not something I think about while asleep or awake.
I also have hypnopompic sleep paralysis, and sometimes wake up thinking I am dead. This is like the inverse of sleepwalking—the mind wakes up some time before the body can move. I’m not exactly sure whether one breathes during this period or not, but it’s certainly impossible to consciously breathe, and one immediately knows that one cannot, so if one does not think oneself already dead (which for me is rare) one thinks one will suffocate soon. Confabulating something physical blocking the mouth or constricting the trunk can occur. It’s actually not as bad to think one is dead, because then one is (sometimes) pleasantly surprised by the presence of an afterlife (even if movement is at least temporarily impossible) and one does not panic about dying—at least I don’t.
So all in all I’d say I have less respect for intuitions like that than most do.
One point is that I feel very unconfused. That is, not only do I not feel confused now, I once felt confused and experienced what I thought was confusion lifting and being replaced by understanding.
I feel like it’s useful that a part of me always kicks in
Which one, if just one, criterion for usefulness are you using here? It is useful for the human to have pain receptors, but there is negative utility in being vulnerable to torture (and not just from one’s personal perspective).
Surely you don’t expect that even the most useful intuition is always right? This is similar to the Bin Laden point above, that the most justified and net-good action will almost certainly have negative consequences.
I’m willing to call your intuition useful if it often saves you from being misled, and its score on any particular case is not too important in its overall value.
However, its score on any particular case is indicative of how it would do in similar cases. If it has a short track record and it fails this test, we have excellent reason to believe it is a poorly tuned intuition, because we know little other than how it did on this hypothetical; even so, its poor performance on this one hypothetical should never itself be considered a significant part of what makes it generally out of step with moral dilemmas. This is analogous to getting cable ratings from only a few tracked boxes: we think many millions watched a show because many of the thousands tracked did, but we do not think those thousands constitute a substantial portion of the audience.
Which one, if just one, criterion for usefulness are you using here? It is useful for the human to have pain receptors, but there is negative utility in being vulnerable to torture (and not just from one’s personal perspective).
That’s the one I’m referencing. My fear of having been terribly immoral (which could also be even less virtuously characterized as being or at least being motivated by an unreasonable fear of negative social feedback) is useful because it increases the extent to which I’m reflective on my decisions and practical moral positions, especially in situations that pattern match to ones that I’ve already implicitly labeled as ‘situations where it would be easy to deceive myself into thinking I had a good justification when I didn’t’, or ‘situations where it would be easy to throw up my hands because it’s not like anyone could actually expect me to be perfect’. Vegetarianism is a concrete example. The alarm itself (though perhaps not the state of mind that summons it) has been practically useful in the past, even just from a hedonic perspective.
OK, sometimes you will end up making the same decision after reflection and having wasted time; other times you may even change from a good decision (by all relevant criteria) to a bad one simply because your self-reflection was poorly executed. That doesn’t necessarily mean there’s something wrong with you for having a fear, or with your fear itself (though it seems too strong in my opinion).
This should be obvious—it wasn’t to me until after reading your comment the second time—but “increases the extent to which I’m reflective” really ought to sound extraordinarily uncompelling to us. Think about it: a bias increases the extent to which you do something. It should be obvious that that thing is not always good to increase, and the only reason it seems otherwise to us is that we automatically assume there are biases in the opposite direction that won’t be exceeded however much we try to bias ourselves. Even so, to combat bias with bias—it’s not ideal.
I used some rhetoric, like using the word ‘evil’ and not rounding 3^^^3+1 to just 3^^^3, to highlight how the people whose fate you’re choosing might perceive both the problem and how you’re thinking about the problem. It’s just… I have a similar reaction when thinking about a human self-righteously proclaiming ‘Kill them all, God will know His own.’, but I feel like it’s useful that a part of me always kicks in and says I’m probably doing the same damn thing in ways that are just less obvious. But maybe it is not useful.
Umm, didn’t you (non-trollishly) advocate indiscriminately murdering anyone and everyone accused of heresy as long as it’s the Catholic Church doing it?