I’m an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality. (Longer bio.)
This post feels to me in some ways like the first chapter of a religious teaching. The post keeps talking about wholesomeness in a way where I have a (perhaps unjustified) sense that it is presuming or expecting me to already know what the word means, and talking as though it has successfully explained it, but I’m not sure it succeeds (e.g. the circular definition for how to make wholesome decisions), and that failure mode feels common for religious texts about how to live a good life.
Pretty good essay. On first pass, I don’t feel like this post manages to communicate the concept of wholesomeness well enough to pin it down for someone who didn’t already grasp what it was trying to convey. I shall give it a quick go.
When I am choosing an action and justifying it as wholesome, what it often feels like is that I am trying to track all the obvious considerations, but some (be it internal or external) force is pushing me to ignore one of them. Not merely to trade off against it, but to look away from it in my mind. And against that force I’m trying to defend a particular action as the best one, all things considered—the “wholesome” action.
I am having a hard time thinking of examples, in part because I think I’ve been doing better on this axis in recent years, but I think one of the most tempting versions of this for me has been to ignore people’s feelings and my impacts on them when I have a mission that is very important. For instance, I might think someone has done terribly at some work that they’re doing on a project I’m leading. Now, I think it’s good to be straight with people and it’s good communication to give feedback early and clearly. So I want to let them know that the work has been worse than useless and I regret handing it off to them. This will likely cause them some fear and feel destabilizing to their social status, and that will cause them stress, and who knows how they deal with that. It is tempting here for me to choose not to pay attention to that when I decide to give them feedback, and as I do so, and after. And I have a great justification: the work is exceedingly important! And if they say “Ben, I feel like you’re being hurtful and not caring about my feelings” I can say “But this is what I have to do for the mission! It’s important! We all agree on that!” And nobody around will disagree, because it has often been the core premise of my social groups that the only reason we’re here, the only reason we do what we do, is because we think it’s important. And your feelings don’t weigh on the scales of making the project work.
(That all may be true, but it doesn’t justify me avoiding seeing the direct impacts of my actions. When I notice this, sometimes I have impulses to do other things too—like reassure them in other ways, or show that I still respect them for other things they’ve done, or pick a time and place that is less likely to be embarrassing, or a dozen other things depending on the context. Admitting this to myself and acting on it feels more “wholesome” to me.)
Anyway, I used to be much more willing to stop caring about the impact of my behavior directly on people if I felt that it would distract from getting the important things done. Now I aim to be fully aware, even though it’s actually hard and often quite painful to still go ahead with it while doing that.
Writing this out now I even notice a way I’ve not been very wholesome in my interactions with some individuals I’ve interacted with of late. Noticing why is not sufficient to solve it, alas, because I am quite allergic to some aspects of the relationship, and I think it would be counterproductive to just have that naively be present in our interactions; I might easily act poorly and just make things worse. (I want to think on this more.)
Once you look at the whole of your impact on someone, then you can decide for yourself whether or not to do it. Of course, often you will choose to hurt someone. Reporting a violent crime or theft to the police is often the better decision even though it hurts the individual who broke the law. Even when I look directly at the costs it imposes on them (and the benefits to their would-be future victims, as well as the benefits of maintaining a shared rule of law), I typically do not change my mind about whether I will choose to do something that hurts them.
Overall I would suggest asking yourself “What parts of life and the world do I instinctively turn my attention away from when they come up?”, and then trying to expand the set of things you can look directly at when making decisions. But I think this probably already assumes a high level of self-awareness that has some other pre-requisites.
Perhaps an easier question is “What decisions have I made that hurt me to a surprising degree, what information did I turn my attention away from when I made those decisions that could have guided me better, and why did I ignore it?” And then use that to notice when you’re susceptible to making less wholesome decisions in your life.
(…having now finished reading the post I see you do talk about this aspect of wholesomeness later on, in the section “Wholesome vs virtuous vs right?”)
...after two readings of this obviously awful recommendation I have come to believe that it is a joke.
I often wish I had a better way to concisely communicate “X is a hypothesis I am tracking in my hypothesis space”. I don’t simply mean that X is logically possible, and I don’t mean I assign even 1-10% probability to X, I just mean that as a bounded agent I can only track a handful of hypotheses and I am choosing to actively track this one.
This comes up when a substantially different hypothesis is worth tracking but I’ve seen no evidence for it. There’s a common sentence like “The plumber says it’s fixed, though he might be wrong” where I don’t want to communicate that I’ve got much reason to believe he might be wrong, and I’m not giving it even 10% or 20%, but I still think it’s worth tracking, because strong evidence is common and the importance is high.
This comes up in adversarial situations when it’s possible that there’s an adversarial process selecting on my observations. In such situations I want to say “I think it’s worth tracking the hypothesis that the politician wants me to believe that this policy worked in order to pad their reputation, and I will put some effort into checking for evidence of that, but to be clear I haven’t seen any positive evidence for that hypothesis in this case, and will not be acting in accordance with that hypothesis unless I do.”
This comes up when I’m talking to someone about a hypothesis that they think is likely and I haven’t thought about before, but am engaging with during the conversation. “I’m tracking that your hypothesis would predict something different in situation A, though I haven’t seen any clear evidence for privileging your hypothesis yet, and we aren’t able to check what’s actually happening in situation A.”
A phrase people around me commonly use is “The plumber says it’s fixed, though it’s plausible he’s mistaken”. I don’t like it. It feels too ambiguous between “It’s logically possible” and “I think it’s reasonably likely, like 10-20%”, neither of which is what I mean. This isn’t a claim about its probability, it’s just a claim about it being “worth tracking”.
I could say “I am privileging this hypothesis” but that still seems to be a claim about probability, when often it’s more a claim about importance-if-true, and I don’t actually have any particular evidence for it.
I often say that a hypothesis is “on the table” as a way to say it’s in play without saying that it’s probable. I like this more but I don’t feel satisfied yet.
TsviBT suggested “it’s a live hypothesis for me”, and I also like that, but still don’t feel satisfied.
How these read in the plumber situation:
“The plumber says it’s fixed, though I’m still going to be on the lookout for evidence that he’s wrong.”
“The plumber says it’s fixed, though it’s plausible he’s wrong.”
“The plumber says it’s fixed, and I believe him (though it’s worth tracking the hypothesis that he’s mistaken).”
“The plumber says it’s fixed, though it’s a live hypothesis for me that he’s mistaken.”
“The plumber says it’s fixed, though I am going to continue to privilege the hypothesis that he’s mistaken.”
“The plumber says it’s fixed, though it’s on the table that he’s wrong about that.”
Interested to hear any other ways people communicate this sort of thing!
Added: I am reacting with a thumbs-up to all the suggestions I like in the replies below.
I make space in my week to be bored, where there are no options for short-term distractions that will zombify me (like videogames or YouTube). I usually find that ideas then come to me for things I want that take a bit more work but will be more satisfying, like learning a song on the guitar or reading a book or doing something with a friend.
Chatting with friends who are alive and wanting things is another way I notice such things in myself; usually I catch some excitement from them as I’m empathizing with them.
I wrote the above before reading Anna’s comment to see how our answers differed; seems like our number 1 recommendation is the same!
Cleaning things out also works for me, I did that yesterday and it helped me believe in my ability to make my world better.
I also concur with the grieving one, but I never know how to communicate it. When I try, I come up with sentences like “Now vividly imagine failing to get the thing you want. Feel all the pain and sorrow and nausea associated with it. Great, now go get it!” but that doesn’t seem to communicate why it helps and reads to me like unhelpful advice.
I’m reading you as saying that you think this policy is bad at its overt purpose (i.e. ineffective), but that the covert purpose of testing the ability of the US federal government to regulate AI is worth the information cost of a bad policy.
I definitely appreciate that someone who signed this has written their reasoning publicly. I think it’s not crazy to believe that this will turn out to be good. I feel like it’s a bit disingenuous to sign the letter for this reason, but I’m not certain.
Someone told me that they like my story The Redaction Machine.
I am extremely surprised to read that Russia has such a skewed gender ratio (86 men for every 100 women); that’s far more extreme than even China’s (105 men to 100 women).
I wanted to know why, and so I interrogated ChatGPT for a bit; it explained the following:
The Soviet Union lost around 14% of its population during WWII whereas the UK and France each only lost around 2%.
Also (somehow) life expectancy for Russian men is 68, whereas for UK and French men it’s around 79; yet for Russian women it’s around 78 (much closer to the UK and France’s 83 and 85 respectively).
I am surprised I haven’t seen more thinkpieces written about gender dynamics in Russia, which I expect would skew heavily in men’s favor, as they’re the minority.
I also generally update down on Russia’s health and competence at war.
This was a fun read and felt (for me) simpler and more followable than most things I’ve read explaining math! Thank you.
I got up to the sum of powers of 2 being −1. That bit took a few close reads, but I followed that there was a pattern where the infinite sum 1 + r + r² + r³ + … equals 1/(1 − r), and there’s reason to believe this holds for r between −1 and 1. Then you write that if we apply it to r = 2 then it’s equal to −1, which is a daring question to even ask and also a curious answer! But what justification do you have for thinking that equation holds for r that isn’t between −1 and 1? I think that this was skipped over (though I may be missing something simple).
(Also, if it does hold for numbers that aren’t between −1 and 1, then I believe this also implies that the infinite sums of rⁿ for every r > 1 equal negative numbers, and suggests maybe all infinite sums of positive integers will too.)
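To make my question concrete, here is a quick numerical sketch (my own, not from the post) of why the closed form 1/(1 − r) is only the limit of the partial sums when r is between −1 and 1:

```python
# Partial sums of the geometric series 1 + r + r^2 + ... + r^(n-1),
# compared against the closed form 1/(1 - r).
def partial_sum(r, n):
    return sum(r**k for k in range(n))

def closed_form(r):
    return 1 / (1 - r)

# For r between -1 and 1, the partial sums converge to the closed form:
print(partial_sum(0.5, 50), closed_form(0.5))  # both very close to 2.0

# For r = 2 the closed form gives -1, but the partial sums just keep
# growing (they equal 2^n - 1), so -1 is not a limit of partial sums:
print(partial_sum(2, 10), closed_form(2))      # 1023 vs -1.0
```

So whatever sense in which the formula "holds" at r = 2, it isn’t the ordinary sense of the partial sums approaching a value.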
One thing to do here is to re-write their arguments in your own (ideally more neutral) language, and see whether it still seems as strong.
To me, this felt like a basically successful revival of a LessWrong tradition I enjoyed. Thank you to everyone who took the time to fill it out.
Agreed, great job Skyler!
Curated. This is a thoughtful and clearly written post making an effort to capture a part of human cognition that earlier discussions of human rationality have not done a good job of capturing.
Insofar as this notion holds together, I’d like to see more discussion of questions like “What are the rules for coming to believe in something?” and “How does one come to stop believing in something?” and “How can you wrongly believe in something?”.
The fabrication of options is, I claim, one example of flinching. It’s one of the things we do, as humans, when we feel ourselves about to be forced into choosing an uncomfortable path. There’s a sense of “surely not” that sends our minds in any other available direction, and if we’re not careful—if we do not actively hold ourselves to a certain kind of stodgy actuarial insistence-on-clarity-and-coherence—we’ll more than likely latch onto a nearby pleasant fiction without ever noticing that it doesn’t stand up to scrutiny.
One of my rationalist vices is that I will, on reading a sentence like this, flinch away from it being true about me, and fabricate the option to decide that (starting from now) I will “simply choose not to flinch away” in those situations.
Now, I’m not saying it never works to just decide to do better, but it commonly at least requires both (a) noticing it and (b) having some local slack to process the flinch and then still move toward the thing.
I think a better default for me is to try to be conscious of the flinch. I can overcome it if I have the strength/slack in the moment. And if not, I will unfortunately just ride the flinch (as I have been doing, but formerly I may not have been aware of it).
“I believe in this team, and believe in our ability to execute on our goals” does not naturally translate into “I value this team, and value our ability to execute on our goals”.
My read is that the former communicates that you’d like to invest really hard in the assumption that this team and its ability to execute are strong, and to invest in efforts to realize this outcome; and my read is that the latter just states that they’re currently strong and that’s good.
The theme of “believing in yourself” runs through the anime Gurren Lagann, and it has some fun quotes on this theme that I reflect on from time to time.
The following are all lines said to the young, shy and withdrawn protagonist Simon at times of crisis.
Kamina, Episode 1:
Listen up, Simon. Don’t believe in yourself. Believe in me! Believe in the Kamina who believes in you!
Kamina, Episode 8:
Don’t forget. Believe in yourself. Not in the you who believes in me. Not the me who believes in you. Believe in the you who believes in yourself.
Nia, Episode 15:
If people’s faith in you is what gives you your power, then I believe in you with every fiber of my being!
Curated! This is a nice, short post that feels to me like it accurately describes a bunch of things in my environment over the last decade that nobody ever wrote down in one place before.
I’d be interested if commenters can add other attitudes about applied rationality that they’ve noticed — I agree with niplav that there’s a “Math Theory” attitude where “if you understand the relevant mathematics deeply enough that constitutes rationality”.
(I slightly wish that the title was changed along with the change you made to the post, e.g. “Attitudes about Applied Rationality”.)
The justification part of this post reads to me almost like a parody, and wantonly destructive.
Relevant time stamp is about 4:20 to 5:00. The no-mask shots have the vague look of a filmic ‘re-enactment’ to me. Also, does anyone know if this video is from before or after the pandemic?
I observe that literally every country in the world chose not to do challenge trials in 2020, which could have sped up the vaccine rollout by around 8 months and prevented a great deal of the deaths in those countries. For so many countries to all do the same thing here looks to me exceedingly like conformity (starting with some trend-setter, which I expect was the US). So I think that beliefs like this can quite easily be explained by conformity.