If you are a consequentialist, you probably should avoid discussing porridge breakfasts. [...] Things could get worse if this person had an agenda. They realize they have power over you.
The function of speech is to convey information—to build shared maps that reflect the territory. The reason speech is such a powerful cognitive technology is because accurate beliefs are a convergent instrumental value—whatever you’re trying to do, you’ll probably do a better job if you can make accurate predictions. When I contribute accurate information to the commons, I don’t know all the various downstream consequences of other agents incorporating that information into their maps—I don’t see how I’m supposed to compute that. Even if “You should always choose the action with the best consequences” would be the correct axiology for some superintelligent singleton God–Empress who oversees the whole universe and all the consequences in it, I’m not a God–Empress, and “Just tell the goddamned truth” (with the expectation that this is good on net, because true maps are generically useful to other agents, almost none of whom are evil) seems like a much more tractable goal for me to aim at.
Things arguably get more complicated when the aggressor thinks of themself as being on your side.
What does that even mean? I read lots of authors, including a lot of people who I would personally dislike, because I benefit from reading the information that they have to convey. But they don’t own me, and I don’t own them. Obviously. What’s this “side” thing about? Am I to be construed as being on the “side” of the so-called “rationalist” or “effective altruism” “communities” just because Eliezer Yudkowsky rewrote my personality over the internet twelve years ago? God, I hope not!
Option 4 [Don’t Censor] [...] seem fairly common though deeply unfortunate. It’s generally not very pleasant to be in a world where those who are listened to routinely select options 4 [...]
It’s not very pleasant to live in a world with terrorists trying to control what people think! And any sane blame-allocation algorithm puts the blame on the terrorists, not the people who are trying to think!
This person writes a tweet about food issues, and then a little while later some food critic gets a threat. We can consider this act a sort of provocation of malicious supporters, even if it was unintentional. [...] But we can speculate on what they might have been thinking when they did this.
I agree with Dagon that loudly condemning the malicious actors is the right play, but I’ll accept that it’s not enough to prevent harm in the least convenient possible world.
In that world, my actual response is that it’s not my fault. It’s bad for food critics to get threats! I unequivocally condemn people who do that! If there’s some sort of causal relationship between me telling the truth about food, and food critics getting threats, that runs through other agents who are not me who won’t stop their crimes even if I condemn them … well, that’s a really terrible situation, but I’m not going to stop telling the truth about food. I don’t negotiate with terrorists!
I don’t see how the usual rationale for not negotiating with terrorists applies to the food critics case. It’s not like your readers are threatening food critics as a punishment to you, with the intent to get you to stop writing. Becoming the kind of agent that stops writing in response to such behavior doesn’t create any additional incentives for others to become the kind of agent that is provoked by your writing.
Similarly, it seems to me “don’t negotiate with terrorists” doesn’t apply in cases where your opponent is harming you, but 1) is non-strategic and 2) was not modified to become non-strategic by an agent with the aim of causing you to give in to them because they’re non-strategic. (In cases where you can tell the difference and others know you can tell the difference.)
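The distinction in this comment can be captured in a toy model. (The function names and the behavior assigned to each agent type are my own illustrative assumptions, not anything from the thread.) The point: a strategic threatener conditions its behavior on your policy, so committing to "never concede" removes its incentive to threaten; a non-strategic one harms you regardless, so the commitment buys you nothing against it.

```python
def strategic_threatens(victim_policy: str) -> bool:
    # A strategic agent only issues a threat it expects to pay off,
    # i.e., only if the victim's policy is to give in.
    return victim_policy == "concede"

def nonstrategic_threatens(victim_policy: str) -> bool:
    # A non-strategic agent's behavior doesn't depend on the victim's
    # policy at all; the harm happens either way.
    return True

for policy in ("concede", "never concede"):
    print(policy,
          "| strategic threat:", strategic_threatens(policy),
          "| non-strategic threat:", nonstrategic_threatens(policy))
```

On this toy picture, "don't negotiate with terrorists" does real work only in the first case, which is exactly the comment's point about when the slogan applies.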
Thanks (strong-upvoted), this is a really important objection! If I were to rewrite the grandparent more carefully, I would leave off the second invocation of the “I don’t negotiate …” slogan at the end. I think I do want to go as far as counting evolutionary (including cultural-evolutionary) forces under your “modified to become non-strategic by an agent with the aim [...]” clause—but, sure, okay, if I yell in a canyon and the noise causes a landslide, we don’t want to say I was right to yell because keeping silent would be giving in to rock terrorism.
Importantly, however, in the case of the harassed food critic, I stand by the “not my fault” response, whereas the landslide would be “my fault”. This idea of “fault” doesn’t apply to the God–Empress or other perfectly spherical generic consequentialists on a frictionless plane in a vacuum; it’s a weird thing that we can only make sense of in scenarios where multiple agents are occupying something like the same “moral reference frame”. (Real-world events have multiple causes; consequentialist agents do counterfactual queries on their models of the world in order to decide what action to output, but “who is ‘to blame’ for this event I assign negative utility” is never a question they need to answer.)
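To illustrate the parenthetical: here is a toy consequentialist chooser (the action names and utility values are entirely made up for the sketch). Its decision loop consists of counterfactual queries of the form “if I output action a, what utility results?”—and notably, nothing in it ever computes a “blame” assignment. That concept lives at the multi-agent social layer, not inside single-agent decision theory.

```python
def world_model(action: str) -> float:
    # Hypothetical predicted utility of each available action,
    # standing in for a counterfactual query on a model of the world.
    return {"speak": 3.0, "self-censor": 1.0, "yell_in_canyon": -5.0}[action]

def choose(actions: list[str]) -> str:
    # Pure argmax over counterfactual outcomes; no notion of fault anywhere.
    return max(actions, key=world_model)

print(choose(["speak", "self-censor", "yell_in_canyon"]))  # → speak
```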
But I think blame-allocation is a really important feature of what’s actually going on when crazy monkeys like us have these discussions that purport to be about decision theory, but are really about monkey stuff. (It’s not that I started out trying to minimize existential risk and happened to compute that going on a Free Speech for Shared Maps crusade was the optimal action; as you know, what actually happened was … well, perhaps more on this in a forthcoming post, “Motivation and Political Context of my Philosophy of Language Agenda”.) I have to admit it’s plausible that a superintelligent singleton God–Empress programmed with the ideal humane utility function would advise me to self-censor for the greater good. And coming from Her, I wouldn’t hesitate to take that advice (because She would know). But that’s not the situation I’m actually in! In the “Provoking Malicious Supporters” section of the post, Gooen writes, “This closely mirrors legal discussions of negligence, gross neglect, and malice”, but negligence and neglect are blame-allocation concepts, not single-agent decision theory concepts!
In accordance with the theory of universal algorithmic bad faith, we might speculate that some part of my monkey-brain is modeling “posts that imply speakers should be blamed for negative side-effects of their speech” as enemy propaganda from the Blight dressed up in the literary genre of consequentialism, for which my monkey-brain has cached counter-propaganda. The only reason this picture doesn’t spell complete doom for the project of advancing the art of human rationality is that the genre constraints are actually pretty hard to satisfy and have been set up in a way that extracts real philosophical work out of monkey-brains that have Something to Protect, much as a well-functioning court is set up in a way that extracts Justice, even if (say) the defendant is only trying to save her own neck.
I don’t negotiate with terrorists! Whether or not this person consciously thinks of themselves as having an agenda, if their behavior is conditioned on mine in a way that’s optimized for controlling my behavior—the “I just wanted to let you know” message definitely counts—then I must regard them as an extortionist, who is only threatening harm because they expect the threat to succeed. It would be awfully short-sighted of me to let them get away with that—for the end of that game is oppression and shame, and the thinker that pays it is lost!