“Considering I am deciding the fate of 3^^^3+1 people, I should perhaps not immediately assert my speculative and controversial meta-ethics. Instead, perhaps I should use the averaged meta-ethics of the 3^^^3+1 people I am deciding for.”
I am careful to avoid putting people in a position of such literal moral hazard. That is, a position where people I care about would end up having their current preferences better satisfied by holding preferences different from their current ones. I don’t average.
I’m confused. All people are always in a position to have their current preferences better satisfied by having somewhat different preferences, no?
I have no doubt the case you’re thinking of meets the criteria for “That is...”. One mistake I have made recently is to think my description fit reality because an important scenario fit according to the description, as I diligently checked... but so did every other outcome. Perhaps I am oversensitive right now to seeing this mistake around me, perhaps this is a false positive, or a true positive I wouldn’t have otherwise spotted, or even a true positive I would have spotted without having experience with that mistake—seeing the flaws in others’ arguments is so very much easier than seeing them in one’s own. This is particularly true of gaps, which one naturally fills in.
If you could share some examples of when you were in the position of putting people in a position of moral hazard, that would be great.
All people are always in a position to have their current preferences better satisfied by having somewhat different preferences, no?
Preferences A will be better satisfied if the agent actually holds preferences B than if the agent actually holds preferences A. So the way you get what you would have wanted is by wanting something different. For example, if I have a preference for ‘1’ but I know that someone is going to average my preferences with someone who prefers ‘0’, then I know I can make ‘1’ happen by modifying myself to prefer ‘2’ instead of ‘1’. So averaging sucks.
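A minimal sketch of that incentive in Python; the 0/1/2 numbers are the ones from the example above, and the function and variable names are just illustrative:

```python
# Sketch of the misreporting incentive under mean-averaging.
# Numbers follow the comment above; everything else is made up.

def mean(reports):
    return sum(reports) / len(reports)

true_pref_a, true_pref_b = 1.0, 0.0

# Honest reporting: the averaged outcome lands halfway, at 0.5.
honest_outcome = mean([true_pref_a, true_pref_b])

# Agent A exaggerates to 2 so the average comes out at A's true preference.
gamed_outcome = mean([2.0, true_pref_b])

print(honest_outcome)  # 0.5 -- A's true preference of 1 is missed
print(gamed_outcome)   # 1.0 -- exaggerating recovers exactly what A wanted
```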
Yeah, evolutionary (in the Universal Darwinian sense that includes Hebbian learning) incentives for a belief, attention signal, meme, or person to game differential comparisons made by overseer/peer algorithms (who are themselves just rent-seeking half the time) whenever possible are a big source of dukkha (suffering, imperfection, off-kilteredness). An example at the memetic-societal level: http://lesswrong.com/lw/59i/offense_versus_harm_minimization/3y0k .
In the torture/specks case it’s a little tricky. If no one knows that you’re going to be averaging their preferences and won’t ever find out, and all of their preferences are already the result of billions of years of self-interested system-gaming, then at least averaging doesn’t throw more fuel on the fire. Unless preferences have evolved to exaggerate themselves to game systems-in-general due to incentives caused by the general strategy of averaging preferences, in which case you might want to have precommitted to avoid averaging. Of course, it’s not like you can avoid having to take the average somewhere, at some level of organization...
Averaging by taking the mean sucks. Averaging by taking the median sucks less. It is a procedure relatively immune to gaming by would-be utility monsters.
The median is usually the ‘right’ utilitarian algorithm in any case: it minimizes the total collective distance from the chosen point, while the mean minimizes the total collective distance² from it. There is no justification for squaring.
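A small numeric check of both claims, plus the gaming point from the previous comment; the preference profile here is invented for illustration:

```python
# Check: the median minimizes total absolute distance, the mean minimizes
# total squared distance, and a single exaggerator ("utility monster")
# drags the mean arbitrarily far while barely moving the median.
import statistics

prefs = [0.0, 1.0, 1.0, 2.0, 3.0]  # illustrative one-dimensional preferences

def total_abs(c, xs):
    return sum(abs(x - c) for x in xs)

def total_sq(c, xs):
    return sum((x - c) ** 2 for x in xs)

med, mu = statistics.median(prefs), statistics.mean(prefs)

# Scan candidate points: the median wins on |distance|, the mean on distance^2.
candidates = [i / 10 for i in range(0, 31)]
best_abs = min(candidates, key=lambda c: total_abs(c, prefs))
best_sq = min(candidates, key=lambda c: total_sq(c, prefs))
print(best_abs, med)  # 1.0 1.0
print(best_sq, mu)    # 1.4 1.4

# One agent exaggerates from 3 to 1000: the mean jumps, the median does not.
gamed = [0.0, 1.0, 1.0, 2.0, 1000.0]
print(statistics.mean(gamed))    # 200.8
print(statistics.median(gamed))  # 1.0
```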
Is there a justification for not-squaring?

What’s the appropriate metric on the space of preferences? This seems like something people would have different opinions about; e.g. “People who are smart should have more say!” “People who have spent more time self-reflecting should have more say!” “People who make lifestyle choices like this should be weighted more heavily!” “People who agree with me should have more say!”
Depending on the distribution, squaring could be better, because more might be lost as you get further away. And of course you can only take the median if your preferences are one-dimensional.
Personally, I am unconvinced that there is any fundamental justification for considering anyone’s utility but one’s own. But, if you have reason to respect the principles of democracy, the median stands out as the unique point acceptable to a majority. That is, if you specify any other point, a majority would vote to replace that point by the median.
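A quick check of that majority claim under the usual assumptions (single-peaked preferences, so each voter prefers whichever point is closer to their ideal point, and an odd number of voters so majorities are strict); the ideal points are invented:

```python
# For any alternative point, count the voters strictly closer to the
# median than to the alternative; the median wins every pairwise vote.
import statistics

ideal_points = [0.0, 1.0, 2.0, 5.0, 9.0]  # illustrative voter ideal points
med = statistics.median(ideal_points)     # 2.0

def majority_prefers_median(alternative, voters):
    """Count voters strictly closer to the median than to the alternative."""
    favor = sum(1 for v in voters if abs(v - med) < abs(v - alternative))
    return favor > len(voters) / 2

# Every alternative tested loses a pairwise majority vote to the median.
for alt in [-1.0, 0.5, 1.9, 2.1, 4.0, 10.0]:
    assert majority_prefers_median(alt, ideal_points)
print("the median beats every tested alternative in a pairwise majority vote")
```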
What’s the appropriate metric on the space of preferences?
That depends on what kinds of preferences you are comparing. If you are looking at the preferences of a single person, the standard construction of that person’s utility function sets the “metric”. But if you attempt to combine the preferences of two people, you either need to use the Nash Bargaining solution or Harsanyi’s procedure for interpersonal comparison. The first gives a result that is vaguely median-like. The second gives an answer that is suitable for use with the mean.
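A minimal sketch of the Nash Bargaining solution mentioned here, assuming two agents choosing over a finite set of outcomes; the utility functions and disagreement point are invented for illustration:

```python
# Nash Bargaining solution: among feasible outcomes that improve on the
# disagreement point for both agents, pick the one maximizing the product
# of the two agents' utility gains.

def nash_bargain(outcomes, u1, u2, d1=0.0, d2=0.0):
    """Return the feasible outcome maximizing (u1 - d1) * (u2 - d2)."""
    feasible = [o for o in outcomes if u1(o) > d1 and u2(o) > d2]
    return max(feasible, key=lambda o: (u1(o) - d1) * (u2(o) - d2))

# Split a unit of surplus: agent 1's share is x, agent 2's is 1 - x.
outcomes = [i / 100 for i in range(1, 100)]
u1 = lambda x: x
u2 = lambda x: 1 - x

print(nash_bargain(outcomes, u1, u2))  # 0.5 -- the symmetric split

# Rescaling one agent's utility does not change the chosen outcome:
# the Nash solution is invariant to affine rescaling of either utility,
# which is what lets it sidestep interpersonal utility comparison.
u2_scaled = lambda x: 10 * (1 - x)
print(nash_bargain(outcomes, u1, u2_scaled))  # still 0.5
```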