I just wanted to mention that you assume consequentialist thinking, specifically of the type “what should we do to change X for the better?” This is not at all how most people think. “Price gouging is unfair” is enough to pass legislation, without heeding the consequences. “Abortion is against a sacred rule from God” is enough to fight to prohibit it. “But I can change my mind” is argument enough to two-box. And I’m not even touching issues where people don’t reason at all, or, like politicians, optimize something other than the stated goal.
Yeah, I stumbled over the price-gouging example for similar reasons. After two background examples of inconceivable worlds, the world of that story sounded similarly incoherent to me—I could not have written it in 2021.
Mainly, a world where lawmakers frequently ban price gouging is a world where it’s probably in their interest to do so. So positing that they ban it because they’re somehow mistaken about the consequences sounds wrong to me.
Rather than the options in the story, in my model they follow Asymmetric Justice, social reality, dysfunctional incentives in bureaucracies, taboo tradeoffs, etc.: voters see an action they don’t like (price gouging) and respond with outrage, and lawmakers then respond to this outrage by banning the action and getting rewarded with positive press or something. (Whereas if they instead argue against banning the bad action, they’re accused of supporting it.) From the perspective of the lawmakers, it doesn’t matter one bit what happens as a consequence of the ban, because those consequences are in some sense invisible.
For instance, institutions like the FDA provide constant real-life examples of this dynamic, and Zvi’s Covid posts feature multiple such stories every month.
Ooh, excellent point.
I don’t think I assume that others are actively trying and failing at consequentialist thinking (I think if I’d been queried on this, I would have said words that largely match your perspective/predictions) but I do think that effective people and trying-to-be-effective people should definitely at least often be in a consequentialist mode.
And so I think I was pointing at something like “evaluate the stuff they’re proposing from a consequentialist lens [regardless of whether they themselves are doing so].”
This is a great point. When I feel frustrated with a faulty conversation, I often start by projecting my own motivations onto the other person, or by assuming that their explicitly stated goals are their real goals.
Even if I know that this is wrong, I try to act as if that were so. “You say you’re doing this for the good of humanity? Then I’m going to respond as if that’s really what you cared about, and that you’re going about it badly, even if I know deep down that you’re lying about your motives.”
It’s this perverse form of “bravery.”
There’s a class of shallow incoherent lies that we’re all supposed to know are shallow lies, and yet act as if they were deep coherent truths. It can feel like bravery to “expose” the lie by taking it literally and showing how incoherence is the result.
An alternative is to focus not on bravely confronting the lie in order to expose the object-level truth, but on cannily understanding the motive and function of the lie itself. “Did you lie to me just now, and what’s the truth of the matter?” is a very different question from “I think you just lied to me, so why did you do that?”
“You just pretended to care about price gouging, so why did you do that?” seems like a good way to confront such statements, at least some of the time.
I… don’t think this works, not even in LW circles, as some of the reactions to my old post show: https://www.lesswrong.com/posts/a4HzwhvoH7zZEw4vZ/wirehead-your-chickens
You are right! I worded that very poorly.
Here’s what I meant. If someone has pretend noble motives and true ignoble motives, I sometimes get into a failure mode where I act as if the noble motives were their true motives. Then I try to show how their proposed solution will fail to achieve their pretend noble motives. There’s some sort of idea of “showing them up” behind my own behavior here, and also some idea that this behavior of mine is a “noble” form of bravery.
An alternative approach in such circumstances is to become clear in my own mind that the other person has pretend motives. Then I can interpret their behavior or their proposition in light of that. This could also be useful for self-analysis. This is what I meant when I said “confront,” but this was the wrong word to choose for that! Thanks for pointing that out.
Yeah, you are right: the only hope of getting somewhere is to address their true objections. That’s not easy, though, because they might not even be aware of what those are, and may refuse to acknowledge them when pointed out (again, see the examples in the thread I linked), often because acknowledging them would clash with their self-image. Successfully addressing their real arguments, not the chaff on top, is a difficult skill. If you could do it, it would feel like magic. Or a superpower.
This sounds like the proper use of empathy, as a tool for constructive exchange of perspectives.
Alice suspects Bob is not stating his true objection to her idea. She tries to simulate the kind of mental experience that might cause Bob to resist her point of view. She bases this simulation on what she knows of Bob’s background and personality. This simulation is an example of empathy. Alice can use it to predict what sort of response might shift Bob’s point of view toward her own in a way that he would reflectively endorse.
Improper use of empathy would be creating and broadcasting an empathetic simulation detached from any particular relationship. It no longer serves to make one person understand another. Instead, it creates concern for a fictional character. This character is mistaken for a real person, or a group of people. Concern for this fiction motivates real action, and that action becomes a tool the broadcasters of the simulation can use for their own ends. When we speak of empathy as a problematic motivating force, I think this is the underlying mechanism.
Perhaps it would be good if rationalists promoted this distinction between relational empathy and empathy for a fiction, and focused on practicing relational empathy. It might indeed be a superpower.