How about doing unto others what maximizes total happiness, regardless of what they’d do unto you?
The former is computationally far more feasible.
By acting in a way that discourages them from hurting you, and encouraging them to help you, you are playing your part in maximizing total happiness.
Yeah, but it’s not necessarily the ideal way to act. Perhaps you should act generally better than that, or perhaps you should try to amplify it more. Do what you can to find out the optimal way to act. At least pay attention if you find new information. Don’t just make a guess and assume you’re correct.
You don’t think you should discourage others from hurting you? I think that seems sort of obvious. Now, if you could somehow give a person a strong incentive to help you / not hurt you, while simultaneously granting them a shitload of happiness, that seems ideal. This doesn’t really exclude that; it’s just on the positive side of doing / being done unto.
You should probably discourage others from hurting you. It’s just not clear how much.
As much as possible for the least amount of harm possible and the least amount of wasted time and resources, obviously. Which varies on a case by case basis.
I mean, if it were practical, you’d give your friends 2 billion units of happiness, and then, after turning the other cheek to your enemies, grant them 1.9 billion units of happiness. But living on planet Earth, giving you 80% of the crap you gave me seems about right.
Consider the consequences if everyone follows your rule. Assume someone gives you one unit of crap, possibly accidentally. You respond with 0.8 units. (It’s hard to measure this precisely, but for the sake of argument let’s assume that both of you manage to get it exactly right). He, in turn, responds with a further 0.64 units of crap. You respond to this with 0.512 units.
This is, of course, an infinite geometric series. The end result (over an infinite time period) is that you receive 2 and 7⁄9 units of crap, while the other person receives 2 and 2⁄9 units of crap. He receives exactly 80% of the amount that you received, but you received over twice as much as you started out receiving.
If you return x% of the crap you get (for 0 < x < 100), and everyone else follows the same rule, then the total crap you receive for every starting unit of crap is 1 / (1 − (x/100)²). This is clearly minimized at x = 0.
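The convergence claimed above is easy to check numerically. Here is a minimal sketch (the "units of crap" are, of course, a toy quantity) of two agents who each return 80% of whatever they receive:

```python
# Two agents each return a fraction r of the crap they receive.
# You start by receiving 1 unit; both follow the rule indefinitely.
r = 0.8
received_by_you = 0.0
received_by_him = 0.0
crap = 1.0  # the initial unit, aimed at you

for _ in range(1000):           # plenty of rounds to converge
    received_by_you += crap     # you take the hit
    crap *= r                   # you return 80% of it
    received_by_him += crap     # he takes that hit
    crap *= r                   # he returns 80% of that, and so on

print(round(received_by_you, 6))  # 2.777778  (= 2 7/9 = 1 / (1 - r**2))
print(round(received_by_him, 6))  # 2.222222  (= 2 2/9 = r / (1 - r**2))
```

You receive the even-powered terms 1 + r² + r⁴ + … = 1/(1 − r²), while the other agent receives r + r³ + … = r/(1 − r²), matching the 2 7/9 and 2 2/9 figures given above.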
Alternatively: he could notice that he gave you 1 unit of crap and assume the 0.8 units of crap you gave him is an equal penalty.
If someone yells at you, you’re likely to respond—but if someone yells at you because you just pushed them, you’re less likely to respond.
Or he could know I was going to give him the .512 units, from prior experience, and not give .64, which is the whole point.
That assumes that he is following a different rule from the rule that you are following. Does knowing that he will give you the 0.64 units prevent you from giving him the 0.8 units?
Yes. Depending on the circumstance, I might give him much less or much more and/or choose a different course of action entirely.
Not necessarily. If I horribly torture Jim because Jim stepped on my toes, then I am not maximizing total happiness; the unhappiness given to Jim by the torture outweighs the unhappiness in me that is prevented by having no one step on my toes.
That’s a lot of effort and pain to prevent someone stepping on your toes.
Also, I’m not sure that’d be a terribly effective way to prevent harm to yourself. I mean, to the extent possible, once everyone knows you tortured Jim, people will be scared shitless to step on your toes, but Jim and Jim’s family are very likely to murder you, or at least sue you for all your money and put you in jail for a long time.
You are correct; it is not terribly effective. However, any disproportionate response to a minor, or even an imagined, slight will reduce total happiness even while discouraging others from hurting me.
No. I just told you. Sometimes a disproportionate response encourages other people to hurt you. That’s actually part of the rule.
Doing unto others that which causes maximum total happiness leaves you vulnerable to Newcomb problems. You want to do unto others that which logically entails maximum total happiness. Under certain conditions, this is the same as Pauling’s recommendation.
I never mentioned causation. If you find a way to maximize it acausally, do that.
It has a tendency to go horribly wrong.
It’s impossible to know of a strategy that produces happiness better than trying to produce happiness does, since if you knew of one, trying to produce happiness would mean following that strategy. If this method is what works best, then in doing what works best, you’d be following this method.
Also, linking to TVTropes tends to fall under generalizing from fictional evidence.
Art imitates life. ;)
And it’s not hard to think of real life examples of atrocities “justified” on utilitarian grounds that the rest of the world thinks are anything but justifiable. The Reign of Terror during the French Revolution, for example, is generally regarded as having gone too far.
Would it help if the link were aimed at the real life section?
It has been deleted to prevent an edit war.
It’s a nice sentiment, but the optimization problem you suggest is usually intractable.
It’s better to at least attempt it than just find an easier problem and do that. You might have to rely on intuition and such to get any answer, but you’re not going to do well if you just find something easier to optimize.
Yes, but there’s no way a pithy quote is going to solve the problem for you. It might, however, contain a useful heuristic.
You may do that if you must; I recommend against it.
Why do you recommend against it? Do you have a more complicated utility function?
Most human utility functions give their own happiness more weight than others’. If you take into account that humans increase the happiness of others because it makes them happy, you could even say that human utility functions only care about the happiness of their corresponding humans—but that is close to a tautology (“the utility function cares about the utility of the agent only”).