The Market for Lemons: Quality Uncertainty on Less Wrong

Tl;dr: Articles on LW, if left unchecked (for now, by you), heavily distort a useful view (yours) of what matters.

[This is (though only in part) a five-year update to Patrissimo’s article Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality. However, I wrote most of this article before I became aware of its predecessor. Then again, that fact reinforces the main critique of both articles.]

I claim that rational discussions, whether in person, at conferences, or on forums, social media, and blogs, suffer from adverse selection and promote unwished-for phenomena such as the availability heuristic. Bluntly stated, they (like all other discussions) tend to promote ever worse, unimportant, or wrong opinions and articles. More importantly, highly relevant articles on some topics are conspicuously missing. This can also be observed on Less Wrong. It is not the purpose of this article to determine the exact extent of the problem; it shall merely bring to attention that “what you get is not what you should see.” I am afraid, however, that this effect is largely underestimated.
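To make the adverse-selection mechanism concrete, here is a minimal toy simulation (mine, not from any source; every parameter is an illustrative assumption). It assumes that the status payoff of posting is largely decoupled from a draft’s true quality, while the effort and risk of posting grow with quality, and that each rational author posts only if the payoff exceeds the cost:

```python
# Toy sketch of adverse selection in a posting market. All numbers are
# illustrative assumptions, not measurements of any real forum.
import random

random.seed(0)

N = 100_000
qualities, posted = [], []
for _ in range(N):
    quality = random.random()        # true usefulness of the draft to readers, 0..1
    gain = random.random()           # status/karma payoff, largely decoupled from quality
    cost = 0.2 + 0.6 * quality       # careful, important writing costs more effort and risk
    qualities.append(quality)
    if gain > cost:                  # the rational author posts only if it pays off
        posted.append(quality)

print(f"mean quality, all drafts:    {sum(qualities) / len(qualities):.2f}")  # ~0.50
print(f"mean quality, posted drafts: {sum(posted) / len(posted):.2f}")        # ~0.40
print(f"share of drafts posted:      {len(posted) / len(qualities):.2f}")     # ~0.50
```

The exact numbers mean nothing; the point is that selection on private payoff rather than on quality systematically lowers the average quality of what gets posted and under-produces the most valuable drafts, which is precisely the “lemons” dynamic claimed above.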

This result is by design and therefore to be expected. A rational agent will, by definition, post incorrect or incomplete information, or nothing at all, in the following instances:

  • Cost-benefit analysis: A rational agent will not post information that reduces his utility by enabling others to compete better and, more importantly, by costing him effort, unless some gain (status, money, happiness, …) offsets these effects (see the toy simulation above). Example: Have you seen articles by Mark Zuckerberg? But I also argue that for a random John Doe the personal cost-benefit analysis of posting an article is negative. Even more, if you really drink the LW Kool-Aid, the value of your time should approach infinity; however, that shall be the topic of a subsequent article. I suspect the theme of this article may also be restated as a free-riding problem, as it postulates the non-production or under-production of valuable articles and other contributions.

  • Conflict with the law: Topics like drugs (in the western world), and perhaps politics or sexuality in other parts of the world, are biased due to the risk of persecution, punishment, extortion, etc. And many topics in the spheres of rationality, transhumanism, and effective altruism are at least highly sensitive, especially when you keep arguing until you reach their moral extremes.

  • Inconvenience of disagreement: Because posting truly anonymously takes effort (it currently requires a truly anonymous e-mail address and so forth), disagreeing posts will be avoided, particularly when the original poster is of high status and the risk of the disagreement rubbing off on one’s other articles is thus increased. This is obviously even truer for personal interactions. Side note: the reverse may also apply: more agreement (likes) with high-status posters.

  • Dark knowledge: Even if I know how to acquire a sniper rifle that cannot be traced, I will not share this knowledge. (As with all the other reasons here, there are substantially better examples, but I do not want to make spreading dark knowledge a focus of this article.)

  • Signaling: Seriously, would you discuss your affiliation with LW in a job interview?! Or tell your friends that you are afraid we live in a simulation? (If you don’t see my point, your rationality is totally off base; see the next point.) LW user “Timtyler” commented before: “I also found myself wondering why people remained puzzled about the high observed levels of disagreement. It seems obvious to me that people are poor approximations of truth-seeking agents—and instead promote their own interests. If you understand that, then the existence of many real-world disagreements is explained: people disagree in order to manipulate the opinions and actions of others for their own benefit.”

  • WEIRD-M-LW: It is a known problem that articles on LW are overwhelmingly written by authors who are western, educated, industrialized, rich, democratic, and male. The LW surveys show distinctly that there are most likely many further attributes in which the LW population differs from the rest of the world. LW user “Jpet” argued very nicely in a comment: “But assuming that the other party is in fact totally rational is just silly. We know we’re talking to other flawed human beings, and either or both of us might just be totally off base, even if we’re hanging around on a rationality discussion board.” LW could certainly use more diversity. Personal anecdote: I was dumbfounded by the current discussion around LW T-shirts sporting slogans such as “Growing Mentally Stronger,” which struck me intuitively as highly counterproductive. I then asked my wife, who is far more into fashion and not at all into LW. Her comment (Crocker’s warning): “They are great! You should definitely buy one for your son if you want him to go to high school and be all by himself for the next couple of years; that is, except for the bullying, maybe.”

  • Genes, minds, hormones & personal history: (Even) rational agents are highly influenced by these factors, a fact that seems underappreciated. Think of SSC’s “What universal human experiences are you missing without realizing it?” Think of inferential distances and the typical mind fallacy. Think of slight changes in beliefs after drinking coffee, after working out, when deeply in love for the first time or having just seen your child born, when extremely hungry, or when wanting to stand, and then standing, on the top of a mountain (especially Mt. Everest). Russell pointed out the interesting and strong effect of Schopenhauer’s and Nietzsche’s personal histories on their misogyny. However, it would be a stretch to simply call them irrational. In every discussion you have to start somewhere, but finding a starting point is a lot more difficult when the discussion partners are more diverse. These factors may not result in direct misinformation on LW, but they certainly shape the conversation (see also the next point).

  • Priorities: Specific “darlings” of the LW sphere, such as Newcomb’s paradox or many-worlds, are regularly discussed. One moment of not paying attention to bias, and you may assume they are really relevant. For those of us not currently programming FAI, they aren’t, and they steal attention from more important issues.

  • Other beliefs/goals: Close to selfishness, but not quite the same. If an agent’s beliefs and goals differ from those of most others, the discussion would benefit from his post. Even so, that by itself may not be a sufficient reason for him to post. Example: Imagine somebody like Ben Goertzel. His beliefs on AI, for instance, differed from the LW mainstream. This did not necessarily result in him posting an article on LW, and to my knowledge he won’t, at least not directly. Plus, LW may try to slow him down, as he seems less concerned about the F in FAI.

  • Vanity: Considering the number of self-help threads, the nerdiness, and the like on LW, one may suspect that some refrain from posting out of self-respect, e.g., “I do not want to signal to myself that I belong to this tribe.” This may sound outlandish, but then again, have a look at the Facebook groups of LW and other rationalists, where people frequently ask how they can be more interesting, or how “they can train how to pause for two seconds before they speak to increase their charisma.” Again, if this sounds perfectly fine to you, that may be bad news.

  • Barriers to entry: Your first post requires creating an account. Karma that signals the quality of your posts is still absent. An aspiring author may question the relative importance of his opinion (especially on highly complex topics), his understanding of the problem, the quality of his writing, and whether his research on the chosen topic is sufficient.

  • Nothing new under the sun: Writing an article requires the bold assumption that its marginal utility is significantly above zero. The likelihood of this probably decreases with the number of existing posts, which is, as of now, quite impressive. Patrissimo’s article (footnote [10]) addresses the same point; others mention being afraid of “reinventing the wheel.”

  • Error: I should point out that most of the reasons in this list concern deliberate misinformation. In many cases, however, an article will simply be wrong without the author realizing it. Examples: facts (the earth is flat), predictions (planes cannot fly), and, seriously underestimated, horizon effects (only once more information is available does the rational agent realize that his action did not yield the desired outcome, e.g. a ban on plastic bags).

  • Protection of the group: Opinions, though important, may not be discussed in order to protect the group or its image to outsiders. See “is LW a c***” and Roko’s ***. This argument can also be brought forward much more subtly: an agent may, for example, hold the opinion that rationality concepts are by nature information hazards if they reduce the happiness of the otherwise blissfully unaware.

  • Topicality: This is a problem specific to LW. Many of the great posts, as well as the sequences, originated five to ten years ago. While interest in AI has now reached mainstream awareness, the solid intellectual basis (centered around a few individuals) that LW offered seems to be breaking away gradually, and rationality topics are experiencing their diaspora. What remains is a less balanced account of important topics in the sphere of rationality, and new authors are discouraged from entering the conversation.

  • Russell’s antinomy: Is the contribution that states its own futility ever expressed? Random example of an article title: “Writing articles on LW is useless because only nerds will read them.”

  • +Redundancy: If any of the above reasons applies, I may choose not to post. However, I also expect a rational agent with sufficiently close knowledge to attain the same insight himself, so posting is at the same time not absolutely necessary. An article will “only” speed up the time required to understand a new concept and reduce the likelihood of rationalists diverging due to disagreement (if Aumann is ignored) or faulty argumentation.

This list is not exhaustive. If you think of a factor that is missing from this list and that you expect to account for much of the effect, I would appreciate a hint in the comments.

There are a few outstanding examples pointing in the opposite direction: authors who appear to provide uncensored accounts of their way of thinking and take arguments to their logical extremes when necessary. Most notably Bostrom and Gwern; but then again, feel free to read the latter’s posts on the extortion attempts he has endured.

A somewhat flippant conclusion (more in an FB than a LW voice): After reading the article from 2010, I cannot expect this article (or the ones possibly following, which have already been written) to have a serious impact. It could thus be concluded that it should not have been written. Then again, observing our own thinking patterns, we can identify the influences of many thinkers who may have suspected the same (no hubris intended). And step by step, we will be standing on the shoulders of giants. At the same time, keep in mind that articles from LW won’t get you there; they represent only a small piece of the jigsaw. You may want to read some, observe how instrumental rationality works in the “real world,” and, finally, draw the critical conclusions for yourself. Nobody truly rational will lay them out for you. LW is great if you have an IQ of 140 and are tired of superficial discussions with the hairstylist in your village X. But keep in mind that the instrumental rationality of your hairstylist may still surpass yours, and I don’t even need to say much about that of your president, business leader, or club Casanova. And yet, they may be literally dead wrong, because they have overlooked AI and SENS.

A final personal note: Kudos to the giants for building this great website and starting point for rationalists, and for the real-life progress in the last couple of years! This is a rather skeptical article to start with, but it does have its specific purpose: laying out why I, and I suspect many others, almost refrained from posting.