This seems wise. The reception of the book in the community has been rather Why Our Kind Can’t Cooperate, as someone (I forget who) linked. The addiction to hashing-out-object-level-correctness-on-every-point-of-factual-disagreement and the insistence that “everything must be simulacrum level 0 all the time”… well, it’s not particularly conducive to getting things done in the real world.
I’m not suggesting we become propagandists, but consider pretty much every x-risk-worried Rat who disliked the book because e.g. the evolution analogy doesn’t work, or they would have preferred a different flavor of sci-fi story, or the book should have been longer, or it should have been shorter, or it should have proposed my favorite secret plan for averting doom, or it should have contained draft legislation at the back… For anyone who would endorse such a statement, I think that (metaphorically) there should be an all-caps disclaimer that reads something like “TO BE CLEAR AI IS STILL ON TRACK TO KILL EVERYONE YOU LOVE; YOU SHOULD BE ALARMED ABOUT THIS AND TELLING PEOPLE IN NO UNCERTAIN TERMS THAT YOU HAVE FAR, FAR MORE IN COMMON WITH YUDKOWSKY AND SOARES THAN YOU DO WITH THE LOBBYISTS OF META, WHO ABSENT COORDINATION BY PEOPLE ON HUMANITY’S SIDE ARE LIABLE TO WIN THIS FIGHT, SO COORDINATE WE MUST” every couple of paragraphs.
I don’t mean to say that the time for words and analysis is over. It isn’t. But the time for action has begun, and words are a form of action. That’s what’s missing: the words-of-action. It’s a missing mood. Parable (which, yes, I have learned some people find really annoying):
A pale, frightened prisoner of war returns to the barracks, where he tells his friend: “Hey man, I heard the guards talking, and I think they’re gonna take us out, make us dig a ditch, and then shoot us in the back. This will happen at dawn on Thursday.”
The friend snorts, “Why would they shoot us in the back? That’s incredibly stupid. Obviously they’ll shoot us in the head; it’s more reliable. And do they really need for us to dig a ditch first? I think they’ll just leave us to the jackals. Besides, the Thursday thing seems over-confident. Plans change around here, and it seems more logical for it to happen right before the new round of prisoners comes in, which is typically Saturday, so they could reasonably shoot us Friday. Are you sure you heard Thursday?”
The second prisoner is making some good points. He is also, obviously, off his rocker.
There are two steelmen I can think of here. One is “We must never abandon this relentless commitment to precise truth. All we say, whether to each other or to the outside world, must be thoroughly vetted for its precise truthfulness.” To which my reply is: how’s that been working out for us so far? Again, I don’t suggest we turn to outright lying like David Sacks, Perry Metzger, Sam Altman, and all the other rogues. But would it kill us to be the least bit strategic or rhetorical? Politics is the mind-killer, sure. But ASI is the planet-killer, and politics is the ASI-[possibility-thereof-]killer, so I am willing to let my mind take a few stray bullets.
The second is “No, the problems I have with the book are things that will critically undermine its rhetorical effectiveness. I know the heart of the median American voter, and she’s really gonna hate this evolution analogy.” To which I say, “This may be so. The confidence and negativity with which you have expressed this disagreement are wholly unwarranted.”
Let’s win, y’all. We can win without sacrificing style and integrity. It might require everyone to sacrifice a bit of personal pride, a bit of delight-in-one’s-own-cleverness. I’m not saying keep objections to yourself. I am saying, keep your eye on the fucking ball. The ball is not “being right,” the ball is survival.
I was pushing back on a similar attitude yesterday on twitter → LINK.
Basically, I’m in favor of people having nitpicky high-decoupling discussion on lesswrong, and meanwhile doing rah rah activism action PR stuff on twitter and bluesky and facebook and intelligence.org and pauseai.info and op-eds and basically the entire rest of the internet and world. Just one website of carve-out. I don’t think this is asking too much!
Yeah, I agree. The audience for this book isn’t LessWrong, but lots of people seem to be acting as if pushing back on LessWrong is a defection that will hurt the book’s prospects.

That’s fair!
I’m in the Twitter thread with Steve. I’ll just note that I don’t think it’s realistic to expect the world’s reaction to be more passionate and supportive than the LW community’s signaled reaction.

Why not? It seems extremely reasonable to have a place for persnickety internal-ish discussion, and other content somewhere else?
Are most of the persnickety internal-disagreers actually signalling that they intend to promote the book, or at least not downplay its thesis? I don’t think rationalists at large have a great track record of engaging the outside world as a unified front, or of leaving nuance aside when the nuance would stand in the way of the important parts of the communication. In other words, I don’t think the two types of content are on different platforms. I think it’s usually the same content on both.
In general, I’ve noticed that a lot of people think “scout mindset” means never having to pick up a (metaphorical) rifle. That’s a good way to have a precise model of how you’re going to die, without having any hand in preventing it. The most useful people in the world right now are scouts who are willing to act like soldiers from time to time.
Are most of the persnickety internal-disagreers actually signalling that they intend to promote the book, or at least not downplay its thesis?
One of the persnickety internal disagreers here. I have recommended IABIED to those of my acquaintances who I expect may read it. I don’t really have any other platform to shout about it from, but if I did, I would’ve certainly used it to promote the book, leaving all nitpicking out of it.
I, at least, do explicitly make a distinction between “a place for persnickety internal discussion” and “the public-facing platform”, and would behave differently between the two.

I gave it a good review on Goodreads haha.

The review:
If you think it’s nonsense, please read it! Because logically:
1. It is currently #7 on the NYT bestseller list. Nearly every reviewer appears moderately convinced, and far more experts and individuals have endorsed it than have tried to debunk it.
2. It argues (with confidence!) that humanity will die unless WWII-level efforts are made against AI risk.
So even if you think it is nonsense, do you really want people in one echo chamber to think it is the simple truth acknowledged by experts, and people in another echo chamber to think it is nonsense not even worth debunking? The only thing the two sides agree on is that the answer is so obvious it’s not even worth listening to the other side.
Do not let that happen to such an important question right under your nose. Make the effort to find out WHY you disagree so intensely with so many smart people!
Especially if you are a wonderful person who often tries to make the world a better place!
(PS: My personal book review is: the book was preaching to the choir in my case, haha, but it was still very interesting to read the history stories, and the occasional humor was well done. I don’t fully agree with the solutions in the last chapters, but they feel relatively saner than many other solutions I’ve read about from others in the field.)
On LessWrong I didn’t nitpick this book in particular, but I’ve consistently disagreed with some MIRI positions (e.g. they think it’s futile trying to increase AI alignment spending beyond 0.1% of AI capabilities spending, since the hope that alignment will happen first is completely negligible unless we shut down capabilities).
In principle it makes sense. But in reality right now, the only place where there’s a sizable MIRI-aligned community is the community that’s entirely going the persnickety route. I’m open to different counterfactual comparisons; I’m just noting that compared to the world where there’s a sizable MIRI-aligned community that shows support for MIRI, this world is disappointing.
LessWrong is not an activist community, and should not become one. I think there are some promising arguments for trying to create activist spaces and communities (as well as some substantially valid warnings). I am currently kind of confused about how good it would be to create more of those spaces, but I think if it’s a good idea, people should not try to make LessWrong into one.
I don’t see “how you express yourself on a highly argumentative web forum” as limiting “how you express yourself at a launch party” or “how you express yourself on a popular podcast” or other places.
One is “We must never abandon this relentless commitment to precise truth. All we say, whether to each other or to the outside world, must be thoroughly vetted for its precise truthfulness.” To which my reply is: how’s that been working out for us so far?
[...]
We can win without sacrificing style and integrity.
But you just did propose sacrificing our integrity: specifically, the integrity of our relentless commitment to precise truth. It was two paragraphs ago. The text is right there. We can see it. Do you expect us not to notice?
To be clear, in this comment, I’m not even arguing that you’re wrong. Given the situation, maybe sacrificing the integrity of our relentless commitment to precise truth is exactly what’s needed!
But you can’t seriously expect people not to notice, right? You are including the costs of people noticing as part of your consequentialist decision calculus, right?
No, I just expressed myself badly. Thanks for keeping me honest. Let me try to rephrase—in response to any text, you can write ~arbitrarily many words in reply that lay out exactly where it was wrong. You can also write ~arbitrarily many words in reply that lay out where it was right. You can vary not only the quantity but the stridency/emphasis of these collections of words. (I’m only talking simulacrum-0 stuff here.) There is no canonical weighting of these!! You have to choose. The choice is not determined by your commitment to speaking truth. The choice is determined by priorities about how your words move others’ minds and move the world. Does that make more sense?
‘Speak only truth’ is underconstrained; we’ve allowed ourselves to add (charitably) ‘and speak all the truth that your fingers have the strength to type, particularly on topics about which there appears to be disagreement’ or (uncharitably) ‘and cultivate the aesthetic of a discerning, cantankerous, genius critic’ in order to get lower-dimensional solutions.
When constraints don’t eliminate all dimensions, I think you can reasonably have lexically ordered preferences. We’ve picked a good first priority (speak only truth), but have picked a counterproductive second priority ([however you want to describe it]). I claim our second priority should be something like “and accomplish your goals.” Where your goals, presumably, = survive.
OK, I am rereading what I wrote last night and I see that I really expressed myself badly. It really does sound like I said we should sacrifice our commitment to precise truth. I’ll try again: what we should indeed sacrifice is our commitment to being anal-retentive about practices that we think associate with getting the precise truth, over and beyond saying true stuff and contradicting false stuff. Where those practices include things like “never appearing to ‘rally round anything’ in a tribal fashion.” Or, at a 20-degree angle from that: “doing rhetoric not with an aim toward an external goal, but orienting our rhetoric to be ostentatious in our lack of rhetoric, making all the trappings of our speech scream ‘this is a scrupulous, obsessive, nonpartisan autist for the truth.’” Does that make more sense? It’s the performative elements that get my goat. (And yes, there are performative elements, unavoidably! All speech has rhetoric because (metaphorically) “the semantic dimensions” are a subspace of speech-space, and speech-space is affine, so there’s no way to “set the non-semantic dimensions to zero.”)

This paragraph feels righter-to-me (oh, huh, you even ended up with the same word “ostentatious” as pointer that I did in my comment-1-minute-ago)
This is important enough that you should clarify in your own words. Raymond Arnold, as a moderator of lesswrong.com, is it in fact your position that “what we should indeed sacrifice is our commitment to being anal-retentive about practices that we think associate with getting the precise truth, over and beyond saying true stuff and contradicting false stuff”?
The word and actual connotations of anal-retentive are important to my sentence. (Also, I said “this feels righter-to-me” not “this is right” and I definitely did not make an explicit defense of exactly this wording as aspirational policy)
We absolutely should have more practices that drive at the precise truth than saying true stuff and contradicting false stuff.
Some of those practices should include tracking various metaphorical forests-vs-trees, and being some-kind-of-intentional about what things are worth arguing in what sort of ways. (This does not come with any particular opinion about what sort of ways are worth arguing what sort of things, just, that there exist at least some patterns of nerdy pedantry that do not automatically get to be treated as actively good parts of a good truthseeking culture)
(I think this was fairly obvious and that you are indeed being kind of obnoxious so I have strong downvoted you in this instance)
Thank you for clarifying.

No, it was not obvious!

You replied to a comment that said, verbatim, “what we should indeed sacrifice is our commitment to being anal-retentive about practices that we think associate with getting the precise truth, over and beyond saying true stuff and contradicting false stuff”, with, “This paragraph feels righter-to-me”.
That response does prompt the reader to wonder whether you believe the quoted statement by Malcolm McLeod, which was a prominent thesis sentence of the comment that you were endorsing as feeling righter-to-you! I understand that “This feels righter-to-me” does not mean the same thing as “This is right.” That’s why I asked you to clarify!
In your clarification, you have now disavowed the quoted statement with your own statement that “We absolutely should have more practices that drive at the precise truth than saying true stuff and contradicting false stuff.”

I emphatically agree with your statement for the reasons I explained at length in such posts as “Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think” and “Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists”, but I don’t think the matter is “fairly obvious.” If it were, I wouldn’t have had to write thousands of words about it.
I think that (metaphorically) there should be an all-caps disclaimer that reads something like “TO BE CLEAR AI IS STILL ON TRACK TO KILL EVERYONE YOU LOVE; YOU SHOULD BE ALARMED ABOUT THIS AND TELLING PEOPLE IN NO UNCERTAIN TERMS THAT YOU HAVE FAR, FAR MORE IN COMMON WITH YUDKOWSKY AND SOARES THAN YOU DO WITH THE LOBBYISTS OF META, WHO ABSENT COORDINATION BY PEOPLE ON HUMANITY’S SIDE ARE LIABLE TO WIN THIS FIGHT, SO COORDINATE WE MUST” every couple of paragraphs.
Yeah, I kind of regret not prefacing my pseudo-review with something like this. I was generally writing it from the mindset of “obviously the book is entirely correct and I’m only reviewing the presentation”, and my assumption was that trying to “sell it” to LW users was preaching to the choir (I would’ve strongly endorsed it if I had a big mainstream audience, or even if I were making a top-level LW post). But that does feel like part of the our-kind-can’t-cooperate pattern now.
Politics is the mind-killer, sure. But ASI is the planet-killer, and politics is the ASI-[possibility-thereof-]killer, so I am willing to let my mind take a few stray bullets.
This is an absolutely fantastic phrasing/framing.
I’ll say (as a guy who just wrote a very pro-book post) that this vibe feels off to me. (I’m not sure if any particular sentence seems definitely wrong, but, it feels like it’s coming from a generator that I think is wrong)
I think Eliezer/Nate were deliberately not attempting to make the book some kind of broad thing the whole community could rally behind. They might have done so, but, they didn’t. So, complaining about “why our kind can’t cooperate” doesn’t actually feel right to me in this instance.
(I think there’s some kind of subtle “why we can’t cooperate” thing that is still relevant, but, it’s less like “YOU SHOULD ALL BE COOPERATING” and more like “some people should notice that something is weird about the way they’re sort of… ostentatiously not cooperating?”. Where I’m not so much frustrated at them “not cooperating,” more frustrated at the weirdness of the dynamics around the ostentatiousness. (This sentence still isn’t quite right, but, I’mma leave it there for now.))
I think Eliezer/Nate were deliberately not attempting to make the book some kind of broad thing the whole community could rally behind. They might have done so, but, they didn’t.
IMO they missed their opportunity and now LW is missing its/our opportunity, and either side naturally thinks it’s more the other’s fault.

Keep in mind propagandizing it is also an easy way to get political polarization.