The relevant property isn’t that someone imposes something on you, but rather that you wish to discourage the behavior in question. Going to the store that charges you less 1) saves you $5 and 2) discourages stores from setting prices that are more expensive than other stores by an amount which is less than the transaction cost of shopping at the other store. This benefits you more than saving $5 does all by itself. In fact, if you make a binding precommitment to shop at the other store even if it costs you $6 more, the store will take this into account and probably won’t set the price at $5 more in the first place. (And “‘irrationally’ but predictably being willing to spend money to spite the store” is the way humans precommit.)
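Here’s a minimal toy model of that incentive argument, assuming a deliberately simplified store that only cares about keeping your business; the $5 markup and $6 switching cost are just the numbers from the example above, and everything else is made up for illustration:

    # Hypothetical toy model of the precommitment argument; the store's one-shot
    # decision rule is an assumed simplification, not a real market model.
    SWITCHING_COST = 6.0   # what it costs you to shop at the other store instead
    MARKUP = 5.0           # how much extra this store would like to charge

    def customer_switches(markup: float, precommitted: bool) -> bool:
        # A narrowly "rational" customer switches only when the markup exceeds
        # the switching cost; a precommitted customer switches for any markup.
        return precommitted or markup > SWITCHING_COST

    def store_markup(customers_precommitted: bool) -> float:
        # The store charges the markup only if it expects to keep the customer.
        if customer_switches(MARKUP, customers_precommitted):
            return 0.0   # the markup would drive the customer away, so drop it
        return MARKUP    # the customer won't bother switching, so the markup sticks

    print(store_markup(customers_precommitted=False))  # 5.0: you end up paying $5 more
    print(store_markup(customers_precommitted=True))   # 0.0: the precommitment pays off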
Later on in the story there is some material handling “rational revenge”, where a dath ilani is subject to the theft of a shirt and the cost of tracking down the thief is more than the value of the shirt.
This is also why it’s not irrational to spend more than $5 in time and gas to save $5 on a purchase.
I’d suggest that if the obligation doesn’t involve an inheritance, you have it, at most, in cases where you’d have had it while the person was still alive. You have no obligation to obey a demand from your parents that you become an accountant even while they’re still alive.
When you approach someone, include some words to the effect that you’ll gracefully accept a rejection.
In most social situations, outside of some really weird cases, it does no good to add disclaimers or clarifications. Doing this says one or more of the following:
“I’m more worried about rejection than the average person who doesn’t use those words”. Being more worried about rejection than the average person is considered undesirable and itself makes you more likely to be rejected.
(This is true even though the statement claims that you are less worried about rejection: if you really weren’t worried about rejection, you’d have no reason to say that you aren’t.)
“I don’t understand how to communicate my attitude towards rejection in the normal manner, so I’m doing it in this unusual manner instead”. Not understanding how to communicate is undesirable and makes you more likely to be rejected.
I’ve also been accused of trying to “trap” people into contradicting themselves. This concept confused me for a long time; if they have inconsistent beliefs, how can that be my fault? But I think I’ve figured it out: This happens when we’re discussing a topic that they haven’t put much thought into, and they’re figuring out their beliefs as they go along.
It may be your fault.
It’s not necessarily that the ideas aren’t well thought out; it’s that they implicitly have some limits that are not explicitly stated. If someone deliberately chooses to ignore those implicit limits in order to find a contradiction, that’s their fault for deliberately misunderstanding. And it’s possible to deliberately misunderstand by being literal about something which you know is not supposed to be literal. That’s what people mean by trapping.
And if you just don’t understand those implicit limits, then you’re not trying to trap them, but you’re clueless in a way whose effects resemble trying to trap someone.
Okay, let me clarify a little. If you ask some people what they think of someone who’s killed 25 people, and you ask a similar group what they think of someone who’s killed 50 people, you’re going to get responses that are not meaningfully different. Nobody’s going to advocate a more severe punishment for the 50-person killer, or say that he should be ostracized twice as much (because they’ve already decided the 25-person killer gets the max and you can’t go above the max), or that they would be happy dating the 25-person killer but not the 50-person killer, or that only half the police should be used to try to catch the 25-person killer.
It seems like you would say “We have already decided you are maximally evil, so if you turn back from this course of action, or follow through, it won’t make any difference to our assessment.”
Nobody will say that. But they’ll behave that way.
when the mass murderer stops killing people
Which is the equivalent of completely avoiding meat, not of eating less meat.
(And to the extent that vegetarians don’t behave with meat-eaters like they would with human killers, I’d say they don’t alieve that meat-eaters are like human killers.)
The concept of “least convenient possible world” comes in here. There may be situations in which a mixed strategy is possible without lying, but your idea applies both to those situations and to less convenient situations where it does require lying.
Why do you think it is dishonest for different people to have different levels
The argument is that you should do this “as a mixed strategy”, which would mean that even a group of people with identical beliefs would act differently based on a random factor. Furthermore, I qualified it with:
even if all those people believe exactly the same thing
so they don’t have different levels.
And even in the different levels case, it’s easy for people to pretend they have greater differences than they really do, and their actual differences may not be enough to make the statements truthful.
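To make the “mixed strategy” reading concrete, here’s a minimal sketch with a made-up 30/70 split: every advocate holds exactly the same belief, and only a coin flip decides which message they deliver.

    import random

    # Hypothetical illustration: the 30/70 split is an assumed number, not one
    # taken from the original discussion.
    VEGAN_MESSAGE_PROB = 0.3

    def choose_message(rng: random.Random) -> str:
        # Identical beliefs; only the random draw differs between advocates.
        if rng.random() < VEGAN_MESSAGE_PROB:
            return "no eating animals"
        return "it's okay to just cut down"

    rng = random.Random(0)
    print([choose_message(rng) for _ in range(10)])
    # Identical believers end up delivering different, mutually inconsistent claims.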
IMHO harm reduction is how we create the change to one day be able to protect animals with outrage alone.
It may be the case that you can save more animals if some percentage of you lie than if you all tell the truth. Whether it’s okay to lie in a “harmless” way for your ideology is a subject that’s been debated quite a lot in a number of contexts. (I do think it’s okay to lie to prevent the murder of a human outgroup, although even then, you need to be very careful, particularly with widespread lies.)
The objection to this is something that could be an interpretation of the original post: the Wizard of Oz can give fake brains to the Scarecrow because his problem isn’t really a lack of brains. But “his problem isn’t really what he says it is” is a situational thing that isn’t true for everyone, just like “writing Chinese characters is relaxing” or “sleeping outside is good for you” aren’t true for everyone, so you should be cautious about generalizing it.
I think a mixed strategy with at least some people pushing the no-animals-as-food norm and others reducing animal consumption in various ways is best for the animals.
As far as norms go, this heavily violates the norm of honesty. What you are suggesting implies that even if all those people believe exactly the same thing, some should tell you not to eat animals and some should tell you to cut down. But implicit in what they’re telling you is “my moral values state that this course of action is the most moral”, and it’s not possible for both of those courses of action to be the most moral. You can justify one, and you can justify the other, but you can’t justify them both at the same time: either the person saying “no eating animals” or the person saying “it’s okay to just cut down” must be a liar.
(Separately, even if they didn’t all believe the exact same thing, I would still be very skeptical. If you really think eating animals is mass murder, telling me to eat fewer animals without going completely vegan is equivalent to telling me “If you do what I say, I will no longer think you are murdering 50 people, I will only think you’re murdering 25 people”. But the moral condemnation curve is pretty flat; being a murderer of 25 people is not condemned substantially less than being a murderer of 50 people.)
Evil has a higher bar, where the effects are quite bad without an acceptable reason for doing them.
But the idea isn’t selective. You don’t get to say “selecting the one person in the trolley problem inflicts not-evil pain, so you don’t feel it”—you feel the pain you inflict, whether it’s evil-pain or not-evil pain.
Pain monsters are a theoretical problem here, but I think the concept is still helpful.
It’s more than a theoretical problem. It’s basically the same problem as standard utilitarianism has, except for “disutility” you substitute “pain”. Assuming it includes emotional pain, pretty much every real-life utility monster is a pain monster. If someone works themselves up into a frenzy such that they feel real pain from having to be around Trump supporters, you have to make sure that the Trump supporters are all gone (unless Trump supporters can work themselves up into a frenzy too, in which case you just feel horrible pain whichever side you take).
It also has the blissful ignorance problem, only worse. Someone might want to know unpleasant truths rather than be lied to, but if telling them the unpleasant truth inflicts pain, you’re stuck lying to them.
Evil happens when you are separated from the pain you inflict upon other people.
No, no it doesn’t.
Consider the trolley problem, where you have to hurt 1 person to save 5. Does it work better if you feel all the pain of the one person being run over by a trolley? You might argue that feeling their pain still serves the purpose of making sure you think carefully before deciding that sacrificing the 1 person really is necessary, but the problem with that reasoning is that pain is not well calibrated for getting people to make subtle, situational decisions. It’s just “I can’t stand this much pain, run from it”.
You might further try to save the idea by suggesting that this only fails because we can’t feel pain caused by inaction, but I can’t believe that it would be good to feel pain caused by inaction: everyone who doesn’t donate as much as he can afford toward saving people (and not just 10%, either) would be feeling horrible pain all the time.
You also get problems with the pain equivalent of utility monsters (in this case, beings who feel exceptionally pained at slight injuries) and people who feel pain at good things (like a religious person who feels pain because heretics exist).
That’s why I didn’t call this a bet. (I also didn’t demand he put any money on it, something which rationalists sometimes like and which has its own problems.)
The thing about having a counterparty is that this is already asymmetrical. Eliezer is making a dramatic, catastrophic, prediction. If he turns out to be correct, then of course I’ll be proven wrong. I won’t have any other choice but to admit that I’m wrong, as we’re all herded into shredders so our bodies can be used to make paperclips.
But can Eliezer be proven wrong? No, not if he makes it vague about exactly how long we need to wait, and if he leaves open the possibility of “oh, I said 5 years? I meant 7. No, I meant 9....”
And if he can’t be proven wrong, he has no incentive not to exaggerate the danger. The way it should work is that the more catastrophic your prediction is, the worse you look when you fail, so you’re not going to exaggerate the danger just to get people to listen to you.
Attempting to edit my account preferences and submit gives me this error:
{"id":"users.email_already_taken","value":"xxxxx@xxxxx.com"}
and fails to save the account preferences.
The scenario requires not only that they give them up, but that they give them up essentially immediately, which is less likely.
So the matter turns out to be just as urgent, even if you can’t predict the future. Perhaps such uncertainty only makes it more urgent.
You may not be predicting an exact future, but by claiming it is urgent, you are inherently predicting a probability distribution with a high expected value for catastrophic damage. (And as such, the more urgent your prediction, the more that failure of the prediction to come true should lower your confidence that you understand the issue.)
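A minimal Bayesian sketch of that parenthetical, with made-up priors and likelihoods purely for illustration: the more probability your “urgent” model assigns to catastrophe by the deadline, the more a quiet deadline should lower your confidence in that model.

    # Hypothetical numbers: P(my model is right) starts at 0.5, and a confused
    # model is assumed to give a 5% chance of catastrophe by the deadline anyway.
    def posterior_confidence(prior: float, p_cat_if_right: float,
                             p_cat_if_wrong: float = 0.05) -> float:
        # P(my model is right | no catastrophe occurred by the deadline)
        evidence = prior * (1 - p_cat_if_right) + (1 - prior) * (1 - p_cat_if_wrong)
        return prior * (1 - p_cat_if_right) / evidence

    # Mildly urgent claim: 30% chance of catastrophe by the deadline if right.
    print(posterior_confidence(prior=0.5, p_cat_if_right=0.30))  # ~0.42
    # Very urgent claim: 95% chance of catastrophe by the deadline if right.
    print(posterior_confidence(prior=0.5, p_cat_if_right=0.95))  # ~0.05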
Eliezer is the person who made the prediction to an audience and who is being taken as an authority, not you.
Fair enough. Whatever the actual timeline is, name a number of years before the inevitable AI catastrophe that humans are not going to solve. Your post here suggests that, at the very least, it’s under thirty. If that time passes and there is no AI catastrophe, your confidence that you understand AI risk should drastically go down. (And no “well, wait a few more years after the number I originally gave you” either.)
And if you can’t name a number of years, everyone should heavily discount how urgently they treat it, because you are in a situation where exaggerating its urgency carries no risk to you.
As epistemic learned helplessness is a thing, this will not actually work on most people.
Furthermore, your idea that fanatics can be convinced to give up resources pretty much requires fanatics. Normal people won’t behave this way.
Sure. The fact that putting pressure on the other store is an additional benefit beyond your savings doesn’t mean that putting pressure is worth any arbitrary amount. There are certainly scenarios where shopping at the cheaper store that is expensive to reach is a bad idea.
But it’s not bad just because it costs more to reach than you save on price, which is the typical rationalist line about such things.