dbohdan
Does it make sense that LW voting arrows are arranged the way they are? This is how they look right now:
username 12h ▾ 1 ▴ ✘ 5 ✔
My intuition protests, and I think I know why. Upvote and downvote arrows are usually stacked vertically, with the upvote arrow on top. When you translate a vertical layout into left-to-right text, what was above goes on the left and what was below goes on the right. That gives the following horizontal arrangement:
username 12h ▴ 1 ▾ ✔ 5 ✘
What skills would be transferable for the planning stages of all three examples?
The baseline planning skill is having a start-to-end plan at all as opposed to winging it or only thinking ahead in an ad hoc manner. One step beyond this is writing the plan down, perhaps as a checklist. You can use the written copy to keep track of where you are, refine the plan, and simply to not forget it.
A step beyond that, which seems rarer and less automatic for people, is to employ some kind of “work breakdown structure”: a systematic mapping from higher-level steps (“find out the legal requirements for filming a car chase”) to lower-level steps (“ask indie filmmaker chat what legal firm they recommend”).
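For concreteness, here is a toy sketch of a work breakdown structure as nested data, reusing the example steps above; the second sub-step is a hypothetical placeholder, not something from the original:

```python
# A toy sketch, not a recommendation of any particular tool: a work breakdown
# structure as a mapping from higher-level steps to the lower-level steps that
# accomplish them. The second sub-step below is a hypothetical placeholder.
plan = {
    "find out the legal requirements for filming a car chase": [
        "ask indie filmmaker chat what legal firm they recommend",
        "email the recommended firm for a consultation",  # hypothetical
    ],
}

def checklist(breakdown):
    """Flatten the breakdown into an ordered checklist you can tick off."""
    items = []
    for step, substeps in breakdown.items():
        items.append(step)
        items.extend("  - " + s for s in substeps)
    return items

print("\n".join(checklist(plan)))
```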
This, um, dramatically changes the picture. It could be nothing.
As a heavy user of the Internet, I didn’t recognize this copypasta. My mistake was only googling a large chunk in double quotes.
Edit: “Dramatically” is intended as a pun on “drama”, hence the italics. I think the new information changes the picture significantly, and yet the bio remains a red flag.
I don’t broadly approve of trying to diagnose people over the Internet, nor am I qualified to, but it’s striking how much the “i love mind games” bio suggests borderline personality disorder. It has chronic feelings of emptiness (“i have no passions or goals in life.”), instability in interpersonal relationships (“i love mind games, i love drama, i love fake people.”, “i would not hesitate to betray any of my loved ones at any moment.”), negative self-image (“[...] really no reason for anyone to be around me.”), and so on.
If you are dating and this bio doesn’t make your HUD light up bright red, you are in danger. Read up on personality disorders so you can make more informed decisions about people you are getting involved with.
I like to think of these types of power as “keyholder” power.
If you are looking for a theory of this, it sounds like capability-based security. The author may already know, but I thought I’d point it out. “Capabilities” are digital keys that can be shared but not forged. (Of course, by reductionism, nothing is truly unforgeable in physical reality except maybe some quantum-cryptography magic.)
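As a rough illustration of the idea (a minimal sketch, not any real capability system), a capability can be modeled as an unguessable token that a system maps to a specific right: anyone who holds the token can exercise the right, but nobody can fabricate a valid token from scratch.

```python
# A minimal sketch of the capability idea, not any real capability system.
# A capability is an unguessable token the system maps to a specific right:
# whoever holds the token can use it (shareable), but a valid token cannot
# be constructed from scratch (unforgeable in practice).
import secrets

class FileStore:
    def __init__(self, files):
        self._files = dict(files)   # filename -> contents, not directly exposed
        self._caps = {}             # token -> (filename, mode)

    def grant(self, filename, mode):
        token = secrets.token_urlsafe(32)   # 256 random bits: infeasible to guess
        self._caps[token] = (filename, mode)
        return token

    def read(self, token):
        filename, mode = self._caps[token]  # KeyError means "no such capability"
        if "r" not in mode:
            raise PermissionError("capability does not allow reading")
        return self._files[filename]

store = FileStore({"diary.txt": "dear diary..."})
cap = store.grant("diary.txt", "r")   # the keyholder can hand this string to anyone
print(store.read(cap))                # works for whoever presents the token
```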
Other comments have addressed your comparison of bee to human suffering, so I would like to set it aside and comment on “don’t eat honey” as a call to action. I think people who eat honey (except for near-vegans who were already close to giving it up) are not likely to be persuaded to stop. However, similar to meat-eaters who want to reduce animal suffering caused by the meat industry, they can probably be persuaded to buy honey harvested from bees kept in more suitable[1] conditions. For those people, you could advocate for a “free-range” type of informal standard for honey that means the bees were kept outside in warmer hives, etc. Outdoor vs. indoor is a particularly easy Schelling point. Even with the kind of cheating the “free-range” label has been subject to, it seems like it would incentivize beekeeping practices that are better for the bees.
[1] This can mean “more natural” in the sense of “the way bees are adapted to live in nature” but not necessarily “more natural” in the sense of using natural materials and pre-modern practices. The article “To save honey bees we need to design them new hives” linked in the post notes: “We already know that simply building hives from polystyrene instead of wood can significantly increase the survival rate and honey yield of the bees.” (Link in the original.)
At the edge of my vision, a wiggling spoon reflected the light in a particular way. And for a split second my brain told me “it’s probably an insect”. I immediately looked closer and understood that it was a wiggling spoon. While it hasn’t happened since, it changed my intuition about hallucinations.
This matches my own experience with sleep deprivation in principle. When I have been severely sleep-deprived (sober; I don’t drink and don’t use drugs), my brain has started overreacting to motion. Something moving slightly in my peripheral vision caught my attention as if it were moving dramatically. This even happened with stationary objects that appeared to move as I shifted position. I have experienced about a dozen such false positives in my life and interpreted the motion as an insect only a couple of times. Most times it didn’t seem like anything in particular, just movement that demanded attention. However, “insect” seems an obvious interpretation when you suddenly notice small rapid motion in your peripheral vision. (“Suddenly” and “rapid” because your motion detection is exaggerated.) In reality, it was things like wind gently flapping a curtain.
However, this is not the only way people can hallucinate insects. There is another where they seem to see them clearly. Here is Wikipedia on delirium tremens:
Other common symptoms include intense perceptual disturbance such as visions or feelings of insects, snakes, or rats. These may be hallucinations or illusions related to the environment, e.g., patterns on the wallpaper or in the peripheral vision that the patient falsely perceives as a resemblance to the morphology of an insect, and are also associated with tactile hallucinations such as sensations of something crawling on the subject—a phenomenon known as formication. Delirium tremens usually includes feelings of “impending doom”. Anxiety and expecting imminent death are common DT symptoms.
From this and a few articles I have read over the years, I get a sense that when people are suffering from delirium tremens, they see small creatures of different types distinctly and vividly. So you can probably say there are “insect hallucinations” and “Huh? Is that motion an insect?” hallucinations.
i learned something about agency when, on my second date with my now-girlfriend, i mentioned feeling cold and she about-faced into the nearest hotel, said she left a scarf in a room last week, and handed me the nicest one out of the hotel’s lost & found drawer
— @_brentbaum, tweet (2025-05-15)
you can just do things?
— @meansinfinity
not to burst your bubble but isn’t this kinda stealing?
— @QiaochuYuan
What do people mean when they say “agency” and “you can just do things”? I get a sense it’s two things, and the terms “agency” and “you can just do things” conflate them. The first is “you can DIY a solution to your problem; you don’t need permission and professional expertise unless you actually do”, and the second is “you can defect against cooperators, lol”.
The first seems to correspond to disagreeableness more than to psychological agency. The second I expect to correlate with the dark triad; you can call it the antisocial version of “agency” and “you can just do things”.
If there were incisive rationalist-related videos which were setting the zeitgeist, where are they?
The YouTube channel Rational Animations seems pretty successful in terms of sheer numbers: 385K subscribers, which is comparable to YouTubers who talk about media and technology. Their videos “The True Story of How GPT-2 Became Maximally Lewd” and “The Goddess of Everything Else” have over two million views. Qualitatively, I have seen their biggest videos mentioned a few times where a LW post wouldn’t be. However, the channel principally adapts existing rationalist and AI-safety content. (Sort the videos by popular to see.) I think they’re good at it. Through their competence, new incisive rationalist-related videos exist—as adaptations of older incisive rationalist-related writing.
I don’t know of another channel like it, even though popular YouTube channels attract imitators, and it is hard to imagine them switching to new ideas. Part of it is the resources involved in producing animation compared to writing. With animation so labor-intensive, it makes sense to try out and refine ideas in text and only then adapt them to video. Posters on video-LW with original high-effort content would come to resent how much each mistake cost them compared to a textual post or comment. AI video generation will make it easier to create videos, but precise control over content and style will still demand significantly more effort than text.
GBDE, or Geom-Based Disk Encryption, has specific features for high-security environments where protecting the user is just as important as concealing the data. In addition to a cryptographic key provided by the user, GBDE uses keys stored in particular sectors on the hard drive. If either key is unavailable, the partition can’t be decrypted. Why is this important? If a secure data center (say, in an embassy) comes under attack, the operator might have a moment or two to destroy the keys on the hard drive and render the data unrecoverable. If the bad guys have a gun to my head and tell me to “enter the passphrase or else,” I want the disk system to say, “The passphrase is correct, but the keys have been destroyed.” I don’t want a generic error saying, “Cannot decrypt disk.” In the first situation, I still have value as a blubbering hostage; in the latter, either I’m dead or the attackers get unpleasantly creative.
Absolute FreeBSD, 3rd Edition, Michael W. Lucas (2018), chapter 23
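For intuition, here is a minimal sketch (not GBDE’s actual on-disk format or algorithms) of how a two-part key scheme can tell “wrong passphrase” apart from “correct passphrase, keys destroyed”: the data key requires both the passphrase and key material stored on disk, while a separate check value lets the passphrase be verified on its own.

```python
# A minimal sketch, not GBDE's real design: the data key is derived from BOTH a
# passphrase and key material stored on the disk. Wiping the on-disk material
# makes the data unrecoverable, yet the passphrase can still be verified, so the
# system can report the difference between the two failure modes.
import hashlib
import hmac
import secrets

def setup(passphrase: bytes):
    salt = secrets.token_bytes(16)
    disk_key = secrets.token_bytes(32)  # would live in reserved sectors on disk
    pass_key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)
    pass_check = hashlib.sha256(b"check" + pass_key).digest()   # verifies passphrase alone
    data_key = hmac.new(disk_key, pass_key, "sha256").digest()  # needs both parts
    return salt, disk_key, pass_check, data_key

def unlock(passphrase: bytes, salt, disk_key, pass_check):
    pass_key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)
    if hashlib.sha256(b"check" + pass_key).digest() != pass_check:
        return "cannot decrypt disk (wrong passphrase)"
    if disk_key is None:
        return "the passphrase is correct, but the keys have been destroyed"
    return hmac.new(disk_key, pass_key, "sha256").digest()  # the data key

salt, disk_key, pass_check, data_key = setup(b"hunter2")
disk_key = None  # the operator destroys the on-disk keys
print(unlock(b"hunter2", salt, disk_key, pass_check))
```

This only shows the distinction the quote cares about; a real disk-encryption scheme involves per-sector keys and much more machinery.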
“‘Well-Kept Gardens Die By Pacifism’ Considered Harmful”
Commenting to register my interest: I would like to read this essay. As it stands, “Well-Kept Gardens” seems widely accepted. I can say I have internalized it. It may not have been challenged at any length since the original comment thread. (Please correct me with examples.)
I notice you seem to draw a distinction between “really has ADHD” and “just can’t concentrate”. You may want to read Scott’s “Adderall Risks: Much More Than You Wanted To Know” to dissolve this distinction and have a better framework for making your decision. Here is a central quote about it:
But “ability to concentrate” is a normally distributed trait, like IQ. We draw a line at some point on the far left of the bell curve and tell the people on the far side that they’ve “got” “the disease” of “ADHD”. This isn’t just me saying this. It’s the neurostructural literature, the genetics literature, a bunch of other studies, and the Consensus Conference On ADHD. This doesn’t mean ADHD is “just laziness” or “isn’t biological”—of course it’s biological! Height is biological! But that doesn’t mean the world is divided into two natural categories of “healthy people” and “people who have Height Deficiency Syndrome”. Attention is the same way. Some people really do have poor concentration, they suffer a lot from it, and it’s not their fault. They just don’t form a discrete population.
I was comparing software engineers I knew who were and weren’t engaged with rationalist writing and activities. I don’t think they were strongly selected for income level or career success. The ones I met through college were filtered by the fact that they had entered that college.
My impression is that rationalists disproportionately work at tier 1 or 2 companies. And when they don’t, it’s more likely to be a deliberate choice.
It’s possible I underestimate how successful the average rationalist programmer is. There may also be regional variation. For example, in the US and especially around American startup hubs, the advantage may be more pronounced than it was locally for me.
I continue to feel extremely confused by these posts. What the hell are people thinking when they say rationalists are supposed to win more?
Scott’s comment linked in another comment here sums up the expectations at the time. I am not sure if a plain list like this gives a different impression, but note that my sentiment for the talk wasn’t that rationalists should win more. Rather, I wanted to say that their existing level of success was probably what you should expect.
The average income of long-term rationalists seems likely to be 5-10x their most reasonable demographic counterparts, largely driven by a bunch of outlier successes in entrepreneurship (Anthropic alone is around $100B in equity heavily concentrated in rationalists), as well as early investments in crypto.
I find myself questioning this in a few ways.
Who do you consider the most reasonable demographic counterparts? Part of what prompted me to give the talk in 2018 was that, where I looked, rationalist and rationalist-adjacent software engineers weren’t noticeably more successful than software engineers in general. Professional groups seem highly relevant to this comparison.
If you look at the income statistics in SSC surveys (graph and discussion), you see American-tech levels of income (most respondents are American and in tech), but not 5×–10×. It does depend on how you define “long-term rationalists”.
Why evaluate the success of rationalists as a group by an average that includes extreme outliers? This approach can take you some silly places. For example, if Gwern is right about Musk, all but one American citizen with bipolar disorder could have $0 to their name, and they’d still be worth $44k on average[1]. Can you use this fact to say that bipolar American citizens are doing well as a whole? No, you can’t.
The mistaken expectations that built up for the individual success of rationalists weren’t built on the VC model of rare big successes. “We’ll make you really high-variance, and some of you will succeed wildly” wasn’t how people thought about LW. (Again, I think they were wrong to think this at all. I am just explaining their apparent model.)
Rationalists are probably happier than their demographic counterparts.
It’s a tough call. The median life satisfaction score in the 2020 SSC survey (picked as the top search result) is 8 on a 1–10 scale; the “mood scale” is 7. But then a third of those who answered the relevant questions say they either have a diagnosis of or think they may have depression and anxiety. The most common anxiety scores are 3, 2, then 7. A fourth has seriously considered suicide. My holistic impression is that a lot of rationalists online suffer from depression and anxiety, which are anti-happiness.
The core ideas that rationalists identified as important, mostly around the crucial importance of AI, are now obvious to most of the rest of the world, and the single most important issue in history (the trajectory of the development of AGI) is shaped by rationalist priorities vastly vastly above population baseline.
I agree, some rationalist memes about AI have spread far and wide. Rationalist language like “motte and bailey” has entered the mainstream. It wasn’t the case in 2018[2], and I would want discussions about rationalist success today to acknowledge it. This is along the lines of long-term, collective (as opposed to individual) impact that Scott talks about in the comment.
Of course, Eliezer disagrees that the AI part constitutes a success and seems to think that the memes have been co-opted, e.g., AI safety for “brand safety”.
What the hell do people want?
I think they want superpowers, and some are (were) surprised rationality didn’t give them superpowers. By contrast, you think rationalists are individually quite successful for their demographics, and it’s fine. I think rationalists are about as successful as their demographics, and it’s fine.
[1] According to Forbes Australia, Elon Musk’s net worth is $423 billion. Around 2.8% of the US population of approximately 342 million is estimated by the NIH to be bipolar, giving approximately 9.6 million people. $423,000 million ÷ 9.6 million people = $44,062.5 per person.
[2] Although rationalist terminology had had a success(?) with “virtue signaling”.
Thanks a lot! It’s a good comment by Scott on Sailor Vulcan’s post. I have added it and your other links to the page’s “see also” on my site.
I like this paragraph in particular. It captures the tension between the pursuit of epistemic and instrumental rationality:
I think my complaint is: once you become a self-help community, you start developing the sorts of epistemic norms that help you be a self-help community, and you start attracting the sort of people who are attracted to self-help communities. And then, if ten years later, someone says “Hey, are we sure we shouldn’t go back to being pure truth-seekers?”, it’s going to be a very different community that discusses the answer to that question.
I think we have an example of the first part because it has happened with the postrationalists. As a group, postrationalists are influenced by LW but embrace weaker epistemic norms for what they consider practical reasons. A major theme in “a postrationalist syllabus” is superficially irrational beliefs and behaviors that turn out to be effective, which (generalizing) postrationalists try to harness. This exacerbates the problem of schools proliferating without evidence, as reflected in this joke.
dbohdan’s Shortform
Why don’t rationalists win more?
The following list is based on a presentation I gave at a Slate Star Codex meetup in 2018. It is mirrored from a page on my site, where I occasionally add new “see also” links.
Possible factors
Thinkers vs. doers: selection effects [1] and a mutually-reinforcing tendency to talk instead of doing [2]
Theoretical models spread without selection [2]
Inability and unwillingness to cooperate [2]
People who are more interested in instrumental rationality leave the community [2]
Focusing on the future leads to a lack of immediate plans [2]
Pessimism due to a focus on problems [1]
Success mostly depends on specific skills, not general rationality [1]
Online communities are fundamentally incapable of increasing instrumental rationality (“a chair about jogging”) [3]
Sources
“Why Don’t Rationalists Win?”, Adam Zerner (2015)
“The Craft & The Community—A Post-Mortem & Resurrection”, bendini (2017)
“Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality”, Patri Friedman (2010)
See also
“What Is Rationalist Berkeley’s Community Culture?”, Zvi Mowshowitz (2017)
“Slack Club”, The Last Rationalist (2019)
“Where are All the Successful Rationalists?”, Applied Divinity Studies (2020)
“Rationality !== Winning”, Raemon (2023)
One moral of this story is that there is no such thing as “too easy” when it comes to getting things done.
It’s interesting to think about why you wouldn’t automate a production process. I see six overlapping goals that you can pursue when you choose to do something by hand:
Getting things done. Your goal is to achieve the result by any means, and manual work is the only or the best means available.
Optimization. You can sometimes perform a task better with bespoke materials, tools, and methods. This normally doesn’t imply the whole product must be made the same way.
Assigning rank. Your goal is to establish who is the best. Automation is as appropriate as driving a car in a marathon. (You may still try to drive a car in a marathon if formal rather than actual rank is what you care about. Automation can be used for cheating.)
Making art. The execution is in service of an aesthetic goal.
Maintaining skill. You want to learn and then remember how to achieve the result or to be able to replicate it under conditions where automation isn’t available.
Working more pleasantly. Doing things manually can mean less pressure and stress.
There is also a seventh motivation that isn’t really a goal:
Being satisfied with the status quo or stuck in your ways. You don’t want to automate because doing things manually works well for you. Or maybe it doesn’t work well, but as a human you are a creature of habit. Rightly or wrongly, you also aren’t concerned about the competition.
It sounds like the “real programmers” in the Hamming quote who didn’t want to move on from binary to assembly and then to a high-level language wanted to optimize (“it would be too wasteful of machine time and capacity”, so goal 2), were creatures of habit who didn’t worry about the competition (7), and cared about rank (“no respectable programmer would use it—it was only for sissies!”, 3).
What I wonder is, how did it feel for them from the inside? Besides 2, 3, and 7, did they think they were maintaining skill? This is a reason I have heard repeatedly from programmers who moderate their use of LLMs. From a comment by user lesser23 on a recent Hacker News story:
I have found that after around a month of being forced to use them I felt my skill atrophy at an accelerated rate. It became like a drug where instead of thinking through the solution and coming up with something parsimonious I would just go to the LLM and offload all my thinking. For simple things it worked okay but it’s very easy to get stuck in a loop.
I’ve heard similar comments from friends. The concern seems real enough, and I have responded to it by writing some of the code I know an LLM could write for me and by looking for better solutions to my problems than an LLM would come up with.
I was looking for a term for my diet that indicated adherence and came up with “intermittent intermittent fasting”. (“Time-restricted time-restricted eating” doesn’t have the same ring.)