This is clearly one of the most important posts of 2024, so I’m giving it 9 points.
It accurately reduced my trust in Wikipedia.
It gave me models of what went wrong: specifically, misuse of the “reliable sources” rule by a corrupt administrator, along with a good guess at his motives. This is a big claim, but it’s adequately supported: the author “interviewed dozens of people”.
It explained why coverage of rationalist and EA topics was so bad on Wikipedia, and especially on RationalWiki, back in the day.
It’s well-written, combining narrative and detail.
The only negative (other than that it could read better to progressives) is that it doesn’t seem to have had much impact on Wikipedia. When I pull up the Wikipedia page on LessWrong, I find sections on Roko’s Basilisk and neoreaction and a link to TESCREAL, but nothing on the ideas rationalists actually like, the fact that LW has become the main hub for AI safety discussion, that it’s run by Lightcone, or other objectively more important information ChatGPT could tell you.
This casts some doubt on the thesis, though I don’t know whether that’s because Gerard is still influential, because non-corrupt Wikipedia editors also think the negative aspersions are justified or informative, because the procedural “reliable sources” issue and the history of negative press dictate the article’s focus, or something else.