Life is about joyful explorations.
I agree. I don’t think this kind of behaviour is the worst thing in the world. I just think it is instrumentally irrational.
Premise: people are fundamentally motivated by the “status” granted to them by those around them.
I have experienced the phenomenon of demandingness described in your post, and you’ve elucidated it brilliantly. I regularly attend in-person EA events, and I can see status being rewarded according to impact, which is very different from how it’s typically rewarded in broader society. (This is not necessarily a bad thing.) The status hierarchy in EA communities goes something like this:
People who’ve dedicated their careers to effective causes. Or philosophers at Oxford.
People who facilitate people who’ve dedicated their careers to effective causes, e.g. research analysts.
People who donate 99% of their income to effective causes.
People who donate 98% of their income to effective causes.
People who donate 1% of their income to effective causes.
People who donate their time and money to ineffective causes.
People who don’t donate.
People who think altruism is bad.
This hierarchy is very “visible” within the in-person circles I frequent, being enforced by a few core members. I recently convinced a non-EA friend to tag along, and after the event, they said, “I felt incredibly unwelcome.” Within five minutes, one of the organisers asked my friend, “What charities do you donate to?” My friend said, “I volunteer at a local charity, and my SO works in sexual health awareness.” After a bit of back-and-forth debate, the EA organiser looked disappointed, said, “I’m confused,” then turned his back on my friend. [This is my vague recollection of what happened, not an exact description, and my friend had pre-existing anti-EA biases.]
Upholding the core principles of EA is necessary. An organisation that doesn’t uphold particular principles at the expense of the rest ceases to be an organisation at all. However, the thing about optimisation and effectiveness is that if we’re naively and greedily maximising, we’re probably doing it wrong. If we push people away from the cause by rewarding them with low status the moment we meet them, we will not win many allies.
If we reward low status to people who don’t donate as much as others, we may cause them to halt their donations, quit our game, and instead play a different game in which they are rewarded with relatively more status.
I don’t know how to solve this problem either, and I think it is hard. We can only do so much to “design” culture and influence how status is rewarded within communities. Culture is mostly a thing that just happens, emerging from many agents interacting in the world.
I watched an interview with Toby Ord a while back, and during the Q&A session, the interviewer asked Ord:
Given your analysis of existential risks, do you think people should be donating purely to long-term causes?
Ord’s response was fantastic. He said:
No. I do think this is very important, and there is a strong case to be made that this is the central issue of our time. And potentially the most cost-effective as well. Effective Altruism would be much the worse if it specialised completely in one area. Having a breadth of causes that people are interested in, united by their interest in effectiveness is central to the community’s success. [...] We want to be careful not to get into criticising each other for supporting the second-best thing.
Extending this logic, let’s not get into criticising people for doing good. We can argue and debate how to do good better, but let’s not attack people for doing whatever good they can and are willing to do.
I have seen snide comments about Planned Parenthood floating around rationalist and EA communities, and I find them distasteful. Yeah, sure, donating to malaria prevention saves more lives. But again, the thing about optimisation is that if we are pushing people away from our cause by being parochial, then we’re probably doing a lousy job at optimising.
Loved your comment, especially the “goodharting” interjections haha.
Your comment reminded me of “building” company culture. Managers keep trying to sculpt a company culture, but in reality they have limited control over it. Company culture is more a thing that happens and evolves, and as an individual you can only do so much to nudge it one way or another.
Similarly, status is a thing that just happens and evolves in human society, and sometimes it has good externalities and other times it has bad externalities.
I quite liked “What You Do Is Who You Are” by Ben Horowitz. I thought it offered a practical perspective on creating company culture by focusing on embodying the values you’d like to see instead of just preaching them and hoping others embody them.
I recently read Will Storr’s book “The Status Game” based on a LessWrong recommendation by user Wei_Dai. It’s an excellent book, and I highly recommend it.
Storr asserts that we are all playing status games, including meditation gurus and cynics. Then he classifies the different kinds of status games we can play, arguing that “virtue dominance” games are the worst kinds of games, as they are the root of cancel culture.
Storr has a few recommendations for playing the status game in a positive-sum way. First, view other people as the heroes of their own life stories. If everyone else is the hero of their own story, which character do you want to be in their lives? Of course, you’d like to be a helpful character.
Storr distils what he believes to be “good” status games into three categories. They are:
Warmth: When you are warm, you communicate, “I’m not going to play a dominance game with you.” You imply that the other person will not get threats from you and that they are in a safe place around you.
Sincerity: Sincerity isn’t just about being nice. Sincerity is also about levelling with other people and being honest with them. It signals to someone else that you will tell them when things are going badly and when things are going well. You will not be morally unfair to them or allow resentment to build up and then surprise them with a sudden burst of malice.
Competence: Competence is simply success; it signals that you can achieve goals and be helpful to the group.
I thought this book offered an interesting perspective on an integral aspect of being human: status.
Hello, thank you for the post!
The images in this post are no longer available. I’m wondering if you’re able to embed them directly into the rich text :)
This post has brilliantly articulated a crucial idea. Thank you!
Microfoundations for macroeconomics are a step in the right direction towards a gears-level understanding of economics. Still, our current understanding of cognition and human nature is primarily based on externally visible behaviour, not on gears. Do you think microeconomics is progressing in the right direction towards more gears-level agent models?
I read the arguments against microfoundations, and some opponents point to “feedback loops”. They claim that the arrow of causation is bidirectional between agent behaviour and macroeconomics. For example, agents anticipating an interest rate increase change how they behave. Curious to know what you think about this line of argument.
Causation goes from the lower levels to the higher levels. E.g. we cannot choose to change the laws of physics, but the laws of physics entirely cause everything we experience. Are these “feedback loops” an illusion created by our confusion and lack of gears-level causal understanding, or are they actual gears?
This reminds me of the book “Four Thousand Weeks”. The core idea is that if you become productive at doing something, then society will want you to do more of that thing. For example, if you were good at responding to email, always prompt and never missing a message, society would send you more email because you had built a reputation for being good at responding to it.
Excellent post, thanks, Eli. You’ve captured some core themes and attitudes of rationalism quite well.
I find the “post” prefix unhelpful whenever I see it used. It implies a final state of whatever it is referring to.
What meaning of “rationality” does “post-rationality” refer to? Is “post-rationality” referring to “rationality” as a cultural identity, or is it referring to “rationality” as a process of optimisation towards achieving some desirable states of the world?
There is an important distinction between someone identifying as a rationalist but acting in self-defeating and antisocial ways and the abstract concept of optimisation itself.
I started attending in-person LessWrong meetups a few months ago, and I’ve found that they attract a wide range of people. Of course, there are the abrasive “truth-seekers” who won’t miss an opportunity to make others feel terrible for saying anything they deem factually or morally imperfect. However, on the whole, this is not much different from any other group of people I engage with. I fail to see how prefixing a word with “post” solves any problems.
Oh, it wouldn’t eliminate all selection bias, but it certainly would reduce it. I said “avoid selection bias,” but I changed it to “reduce selection bias” in my original post. Thanks for pointing this out.
It’s tough to extract completely unbiased quasi-experimental data from the world. A frail elder dying from a heart attack during the volcanic eruption certainly contributes to selection bias.
A missing but essential detail: the government compensated these people and provided them with relocation services. Therefore, even the frail were able to relocate.
Recently I came across a brilliant example of reducing selection bias when extracting quasi-experimental data from the world, towards the beginning of the book “Good Economics for Hard Times” by Banerjee and Duflo.
The authors were interested in understanding the impact of migration on income. However, most data on migration contains plenty of selection bias. For example, people who choose to migrate are usually audacious risk-takers or have the physical strength, know-how, funds and connections to facilitate their undertaking.
To reduce these selection biases, the authors looked at people forced to relocate due to rare natural disasters, such as volcanic eruptions.
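The intuition can be shown with a toy simulation (my own sketch, not from the book; the trait name and all numbers are made up). A hidden “grit” trait raises both the probability of choosing to migrate and income itself, so the naive migrant-vs-stayer comparison overstates the true effect of migration, while a disaster that forces a random subset to move recovers it:

```python
import random

random.seed(0)

# Toy population: "grit" boosts both the chance of choosing to migrate
# and income regardless of migration. Migration itself adds a fixed bonus.
TRUE_EFFECT = 10.0
people = [{"grit": random.gauss(0, 1)} for _ in range(100_000)]

def income(person, migrated):
    return 50 + 5 * person["grit"] + (TRUE_EFFECT if migrated else 0)

# Voluntary migration: only high-grit people migrate (selection bias).
voluntary = [(p, p["grit"] > 0.5) for p in people]
mig = [income(p, True) for p, m in voluntary if m]
stay = [income(p, False) for p, m in voluntary if not m]
naive = sum(mig) / len(mig) - sum(stay) / len(stay)

# "Natural disaster": a random 10% are forced to move, independent of grit.
forced = [(p, random.random() < 0.1) for p in people]
mig_f = [income(p, True) for p, m in forced if m]
stay_f = [income(p, False) for p, m in forced if not m]
quasi = sum(mig_f) / len(mig_f) - sum(stay_f) / len(stay_f)

print(f"true effect of migration:       {TRUE_EFFECT}")
print(f"naive (self-selected) estimate: {naive:.1f}")  # well above the true effect
print(f"quasi-experimental estimate:    {quasi:.1f}")  # close to the true effect
```

The forced movers are representative of the whole population in grit, so comparing them to stayers isolates the effect of migration itself; the voluntary migrants’ extra grit gets wrongly attributed to migration.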
Words cannot possibly express how thankful I am for you doing this!
I bet that most of them would replicate flawlessly. Boring lab techniques and protein structure dominate the list, nothing fancy or outlandish. Interestingly, the famous papers like relativity, expansion of the universe, the discovery of DNA etc. don’t rank anywhere near the top 100. There is also a math paper on fuzzy sets among the top 100. Now that’s a paper that definitely replicates!
Yep. Where I work, we call it DDD. Deadline Driven Development.
Excellent article! I agree with your thesis, and you’ve presented it very clearly.
I largely agree that we cannot outsource knowledge. For example, you cannot outsource the knowledge of how to play the violin; you must invest countless hours of deliberate practice to learn it.
A rule of thumb I like is only to delegate things that you know how to do yourself. A successful startup founder is capable of comfortably stepping into the shoes of anyone they delegate work to. Otherwise, they would have no idea what high-quality work looks like and how long work is expected to take. The same perspective applies to wanting to cure ageing with an investment of a billion dollars. If you don’t know how to do the work yourself, you have little chance of successfully delegating that work.
Do you think outsourcing knowledge to experts would be more feasible if we had more accurate and robust mechanisms for distinguishing the real experts from the noise?
The orb-weaving spider. I updated my original post to include the name.
Excellent write-up. Thanks, Elizabeth.
I’m a software engineer at a company that implements a “20% time” policy. Every couple of months, we have a one-week (sometimes two-week) sprint for the 20%. As you’ve pointed out, it works out to be less than 20%, and many engineers choose to keep working on their primary projects to catch up on delivery dates.
In the weeks leading up to the 20% sprint, we create a collaborative table in which engineers propose ideas and pitch those ideas in a meeting on the Monday morning of the sprint. Proposals fall into two categories:
Reducing technical debt. E.g. deprecating the usage of an old library.
Prototyping a new idea. E.g. trying out the performance of a new library.
I find the 20% sprints very valuable. A lot of the time, there is work I would like to see done that doesn’t fit well within “normal” priorities. I believe such work is valuable based on my experience and knowledge, but it lacks visibility at higher levels of the organisation. Without the 20% sprint, this sort of work would never make its way into our everyday schedule.
On the mating habits of the orb-weaving spider:
These spiders are a bit unusual: females have two receptacles for storing sperm, and males have two sperm-delivery devices, called palps. Ordinarily the female will only allow the male to insert one palp at a time, but sometimes a male manages to force a copulation with a juvenile female, during which he inserts both of his palps into the female’s separate sperm-storage organs. If the male succeeds, something strange happens to him: his heart spontaneously stops beating and he dies in flagrante. This may be the ultimate mate-guarding tactic: because the male’s copulatory organs are inflated, it is harder for the female (or any other male) to dislodge the dead male, meaning that his lifeless body acts as a very effective mating plug. In species where males aren’t prepared to go to such great lengths to ensure that they sire the offspring, then the uncertainty over whether the offspring are definitely his acts as a powerful evolutionary disincentive to provide costly parental care for them.
Source: “The Social Instinct” by Nichola Raihani.