Please don’t awkwardly distance yourself because it didn’t end up saying exactly the things you would have said, unless it’s actually fucking important.
Raemon, thank you for writing this! I recommend each of us pause and reflect on how we (the rationality community) sometimes have a tendency to undermine our own efforts. See also Why Our Kind Can’t Cooperate.
Fwiw, I’m not sure if you meant this, but I don’t want to lean too hard on “why our kind can’t cooperate” here, or at least not try to use it as a moral cudgel.
I think Eliezer and Nate specifically were not attempting to do a particular kind of cooperation here (with people who care about x-risk but disagree with the book’s title). They could have made different choices if they wanted to.
In this post I defend their right to make some of those choices, and their reasoning for doing so. But, given that they made them, I don’t want to pressure people to cooperate with the media campaign if they don’t actually think that’s right.
(There’s a different claim you may be making which is “look inside yourself and check if you’re not-cooperating for reasons you don’t actually endorse”, which I do think is good, but I think people should do that more out of loyalty to their own integrity than out of cooperation with Eliezer/Nate)
I don’t mean to imply that we can’t cooperate, but it seems to me free-thinkers often underinvest in coalition building. Mostly I’m echoing e.g. ‘it is ok to endorse a book even if you don’t agree with every point’. There is a healthy tension between individual stances and coalition membership; we should lean into these tough tradeoffs rather than retreating to the tempting comfort of purity.
If one wants to synthesize a goal that spans this tension, one can define success more broadly so as to factor in public opinion. There are at least two ways of phrasing this:
Rather than assuming one uniform standard of rigor, we can think more broadly. Plan for the audience’s knowledge level and epistemic standards.
Alternatively, define one’s top-level goal as successful information transmission rather than merely intellectual rigor. Using the information-theoretic model, plan for the channel [1] and the audience’s decoding.
I’ll give three examples here:
For a place like LessWrong, aim high. Expect that people have enough knowledge (or can get up to speed) to engage substantively with the object-level details. As I understand it, we want (and have) a community where purely strategic behavior is both discouraged and unhelpful, because we want to learn together to unpack the full decision graph relating to future scenarios. [2]
For other social media, think about your status there and plan based on your priorities. You might ask questions like: What do you want to say about IABIED? What mix of advocacy, promotion, clarification, agreement, and disagreement are you aiming for? How will the channel change (amplify, distort, etc.) your message? How will the audience perceive your comments?
For 1-to-1 in-person discussions, you might have more room for experimentation in choosing your message and style. You might try out different objectives. There is a time and place for being mindful of short inferential distances and therefore building a case slowly and deliberately. There is also a time and place for pushing on the Overton window. What does the “persuasion graph” look like for a particular person? Can you be ok with getting someone to agree with your conclusion even if they get there from a less rigorous direction? Even if that other path isn’t durable as the person gains more knowledge? (These are hard strategic questions.)
Personally, I am lucky that I get to try out face-to-face conversations with new people many times a week to see what happens. I am not following any survey methodology; this is more open-ended and exploratory, so that I can get a feel for the contours.
[1]: Technical note: some think of an information-theoretic channel as only suffering from Gaussian noise, but that’s only one case. A channel can be any conditional probability distribution p(y|x) (output given input) and need not be memoryless. (Note that a conditional probability distribution generalizes the notion of a function, which must be deterministic by definition.)
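To make that concrete, here is a minimal sketch (my own illustration, not part of any cited source) of a channel as a conditional distribution p(y|x), using a binary symmetric channel with an assumed flip probability:

```python
import random

def binary_symmetric_channel(x: int, flip_prob: float = 0.2) -> int:
    """One concrete p(y|x): flip the input bit with probability flip_prob.
    Any conditional distribution over outputs given inputs would also qualify."""
    return x ^ 1 if random.random() < flip_prob else x

# Send a short message through the channel and compare what comes out.
message = [1, 0, 1, 1, 0, 0, 1, 0]
received = [binary_symmetric_channel(bit) for bit in message]
print("sent:    ", message)
print("received:", received)
```

The same framing covers messier channels (say, a social-media feed that drops or reorders parts of a message) by changing the distribution, not the model.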
[2]: I’d like to see more directed-graph summaries of arguments on LessWrong. Here is one from 2012 by Dmytry titled A belief propagation graph (about AI Risk).
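As a rough sketch of what such a summary could look like as data (the claims below are placeholders I made up, not a reconstruction of Dmytry’s graph):

```python
# Hypothetical mini argument graph: each edge points from a claim to a claim it supports.
argument_graph: dict[str, list[str]] = {
    "AI capabilities keep improving": ["Unaligned superintelligence is plausible"],
    "Alignment remains unsolved": ["Unaligned superintelligence is plausible"],
    "Unaligned superintelligence is plausible": ["A strong policy response is warranted"],
}

def downstream_conclusions(graph: dict[str, list[str]], premise: str) -> set[str]:
    """Follow support edges transitively to find every conclusion a premise feeds into."""
    seen: set[str] = set()
    stack = list(graph.get(premise, []))
    while stack:
        claim = stack.pop()
        if claim not in seen:
            seen.add(claim)
            stack.extend(graph.get(claim, []))
    return seen

print(downstream_conclusions(argument_graph, "Alignment remains unsolved"))
```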
Updated on 2025-09-27.
Once upon a time, I read a version of “why our kind can’t cooperate” that was directed at secular people. I read it maybe a decade ago, so I may misremember a lot of things, but this is what I remember:
There is an important difference in activism that leads to the result that religious people win: they support actions even if they don’t agree with everything, while we don’t. A secular organization will have people nitpick and disagree and then avoid contributing despite 90% agreement, while a religious group will just issue a call to act and have people act, even if they only agree 70%.
Now, I will say the important part is being Directionally Correct.
The organization that wrote that piece wasn’t thinking about things in Prisoner’s Dilemma terms, or about Cooperation. All the people and organizations here pursue their own goals.
And yet, this simple model looks to me like what is happening now. People concentrate on the 10% disagreement, instead of seeing the 90% agreement and the Directional Correctness and joining the activism.
So, In My Model, game-theoretic cooperation is irrelevant to ability-to-cooperate. The point is that people set their threshold for joining the activism (the use of the word “cooperate” here may be confusing, as it refers both to joining someone in doing something and then doing it together, and to the game-theoretic concept) wrongly high, in a way that predictably results in the group of people who have this threshold losing to the group of people with a lower threshold.
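Here is a toy simulation of that threshold point (my own sketch, with a made-up uniform distribution of agreement levels), just to show the turnout gap:

```python
import random

random.seed(0)

def joiners(num_people: int, agreement_threshold: float) -> int:
    """Count how many people join, assuming each person's agreement level is
    drawn uniformly between 60% and 100% (an assumption for illustration only)."""
    return sum(
        random.uniform(0.6, 1.0) >= agreement_threshold
        for _ in range(num_people)
    )

# Two groups of equal size, differing only in how much agreement they demand
# of themselves before acting.
print("joiners at a 90% threshold:", joiners(1000, 0.90))  # roughly a quarter join
print("joiners at a 70% threshold:", joiners(1000, 0.70))  # roughly three quarters join
```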
(I also don’t tend to see pointing out “you are using a predictably losing tactic” as a cudgel, but I am also pretty immune to drowning child arguments, so I may be colorblind to some dynamic here.)