(FWIW in this comment I am largely just repeating things already said in the longer thread… I wrote this mostly to clarify my own thinking.)
I think the conflict here is that, within intellectual online writing circles, using the title of a post to directly set a bottom line on the status of something is defecting on a norm, but this is not so in the ‘internet of beefs’ rest of the world, where titles are readily used as cudgels in status fights.
Within intellectual online writing circles, this is not a good goal for a title, and it’s not something that AI 2027 did (or, like, something that ~any ACX post or ~any LW curated post does)[1]. This is not the same as “not putting your bottom line in the title”; it’s “don’t attempt to directly write the bottom line about the status of something in your title”.
I agree you’re narrowly correct that it’s acceptable to have goals for changing the status of various things, and it’s good to push back on implying that that isn’t allowed by any method. But I think Zvi’s point was that the critique post attempted to do it using the title itself, which is not something AI 2027 did, and is IMO defecting on a worthy truce in intellectual online circles.
To the best of my recollection. Can anyone think of counterexamples?
It’s difficult to determine what you would or wouldn’t call “directly writ[ing] the bottom line about the status of something in your title.”
titotal’s post was titled “A deep critique of AI 2027’s bad timeline models.” Is that more or less about the status of the bottom line than “Futarchy’s fundamental flaw” is? What about “Moldbug sold out” over on ACX?
In any case, it does seem that LW curated posts and ACX posts both usually have neutral titles, which is notable given the occasionally contentious nature of their contents.
“Moldbug sold out” is definitely an attack on someone’s status. I still prefer it, because it makes a concrete claim about why. For instance, if the AI 2027 critique post title was “AI 2027′s Graphs Are Made Up And Unjustified” this would feel to me much better than something only about status like “AI 2027′s Timeline Forecasts Are Bad”.
Added: I searched through a bunch of the ACX archives specifically for the word ‘bad’ in titles. I think both titles I found make a substantive claim about what is bad: Bad Definitions Of “Democracy” And “Accountability” Shade Into Totalitarianism and Perhaps It Is A Bad Thing That The World’s Leading AI Companies Cannot Control Their AIs (the latter of which is slightly sarcastic while making the object-level claim that the AI companies cannot control their AIs).
Added 2: It was easier to search the complete SSC history for ‘bad’. The examples are Bad Dreams, How Bad Are Things?, Asymmetric Weapons Gone Bad, and Response To Comments: The Tax Bill Is Still Very Bad, which was the sequel to The Tax Bill Compared To Other Very Expensive Things. The last is the only one similar to what we’re discussing here, but in context it was said in response to his commenters and as the sequel to a post which did a substantive thing; the title was not the primary thesis presented to the rest of the internet, which again seems different to me.
But then that wouldn’t be an accurate description of what titotal’s post is about.
“AI 2027′s authors’ arguments for superexponential growth curves are conceptually flawed, and their exponential model is neither exponential nor well-justified, and their graphs are made up and unjustified, and their projections don’t take into account many important variables, and benchmark+gaps is a worse model than the simplified one [for technical reasons], and these kinds of forecasts should be viewed with inherent skepticism for the following reasons” would be a proper summary of what is going on… but obviously it’s not suitable as a title.
I mean… the reason the AI 2027 critique isn’t titled “AI 2027′s Graphs Are Made Up And Unjustified” is obviously because the critique is about so much more than just some graphs on ACX and Twitter, right? That’s just one small part of the criticism, regardless of how much post-publication public discourse has focused on that one aspect.
The post is ultimately about why the timeline forecasts are (according to the author) bad, so it seems quite hard to title it something direct and concrete when it’s a compilation of many separate issues titotal has with AI 2027.
Hmm, interesting. I was surprised by the claim, so I did look back through ACX and posts from the LW review, and they do seem to back up your claim (the closest I saw was “Sorry, I Still Think MR Is Wrong About USAID”; note I didn’t look very hard). EDIT: Actually I agree with sunwillrise that “Moldbug sold out” meets the bar (and in general my felt sense is that ACX does do this).
I’d dispute the characterization of this norm as operating “within intellectual online writing circles”. I think it’s a rationalist norm if anything. For example I went to Slow Boring and the sixth post title is “Tema Okun’s “White Supremacy Culture” work is bad”.
This norm seems like it both (1) creates incentives against outside critique and (2) lessens the extremes of a bad thing (e.g. like a norm that even if you have fistfights you won’t use knives). I think on balance I support it but still feel pretty meh about its application in this case. Still, this did change my mind somewhat, thanks.
I am both surprised and glad my comment led to an update :)
FWIW I never expect the political blogs to be playing by the good rules of the rest of the intellectual writing circles, I view them more as soldiers. Not central examples of soldiers, but enough so that I’d repeatedly be disappointed by them if I expected them to hold themselves to the same standards.
(As an example, I confidently-but-vaguely recall some Matt Yglesias tweets where he endorsed dishonesty for his side of the political divide on some meta-level, in order to win political conflicts; I’d be interested if anyone else recalls this / has a link.)
Andrew Gelman: “Bring on the Stupid: When does it make sense to judge a person, a group, or an organization by its worst?” (Not quite as clearcut, since it doesn’t name the person in the title, but still)
(If this also doesn’t count as “intellectual writing circles”, consider renaming your category, since I clearly do not understand what you mean, except inasmuch as it is “rationalist or rationalist-adjacent circles”.)
I certainly consider Gelman a valid example of the category :)
The Gelman post in question is importantly not arguing that the linked post is bad/stupid; it takes that fully as a given. I actually think that’s an importantly different dynamic, because if you are in a context where you can actually presume with your audience that something is bad, then writing it in a title isn’t actually influencing the status landscape very much (though it’s tricky).
Similarly, I think writing a title which presumes the nonexistence of a Christian god would in other contexts be a pretty bad thing to do, but on LessWrong be totally fine, for similar reasons.
I won’t say I would necessarily be surprised, per se, if he had written something along these lines, at least on Twitter, but as a general matter Matt argues the opposite in “Misinformation mostly confuses your own side”, where he wrote:
My bottom line on this is that saying things that are true is underrated and saying things that are false is overrated.

We’re all acutely aware of the false or misleading things our political opponents say, and it’s easy to convince yourself in the spirit of “turnabout is fair play” that the key to victory is to play dirty, too. The real problem, though, is that not only does your side already say more false and misleading things than you’d like to admit, but they are almost certainly saying more false and misleading things than you realize. That’s because your side is much better at misleading you than they are at misleading people outside of your ideological camp, and this kind of own-team deception creates huge tactical and strategic problems.
I do believe Matt’s support of truth-telling in political fights is instrumental rather than a terminal value for him, so perhaps him articulating this is what you were thinking of?