The strongest motivated reasoning comes from cases where you feel obligated to do something. If it's locked in, you may as well cope with it rather than continue to ruminate on it.
A: We believe it is false. We acknowledge that this is a minority position among physicists, and that we have a conflict of interest.
https://phys.org/news/2025-07-physicists-quantum-world-years.html
Believing the MWI is false is hardly a minority position among physicists: only 15% affirmatively state that it is true, compared to 36% for the Copenhagen interpretation.
I think this would be a very good example of poor epistemics that should be examined carefully. At this point it's basically confirmed that Imane Khelif is genotypically XY, but I don't really care about that specifically; I want to adjudicate whether you could have known this in 2024, when the story first came out, because I think the answer is absolutely, 100%, yes.
If the IBA were going to frame someone, they would do it with a failed doping test, not chromosomes. Khelif could have gotten tested at any time and disproved the IBA, yet never did. Khelif has not participated in any boxing tournament that would require DNA testing and never will, as she has by now admitted to being intersex, though this was obvious from the beginning.
Lin Yu-ting, on the other hand, has undergone genetic testing.
Not having anything to do? No, because I always find something to do. I can tolerate having no electronics or books or anything for several hours; I will just think of things. But I would guess I'd be in the bottom 15% for resistance to going crazy in complete isolation. I actually like silence. I was surprised by how much I like it when I visited an isolated area of Yellowstone National Park where there was (to the best ability of my ears) literally zero ambient noise. It was very relaxing.
But I do get anxious when I realize that I am dreaming, and I immediately start thinking about dream characters morphing into demons and turning against me. I also don't like going to sleep without a video playing.
As a small aside, in reference to the meditation thing: I think I saw in the comments of one of Scott Alexander's blog posts a long time ago (I know, such good provenance; can't find it atm) that a certain percentage of people are psychologically vulnerable to meditation. I'm fairly certain I'm one of those people. I can't handle psychedelics, including weed, and I get paranoid and anxious when meditating.
This certainly seems to be the case with Trump’s (in my understanding) limited ability to govern blue states and cities.
Can you elaborate on that? I’ve never heard anyone say that before.
Something along this line of reasoning has caused me to abandon my normal career and start self-studying math so that I can directly work in AI alignment.
Richard is saying that in a hypothetical world in which AGI was proven impossible, or something of that nature, the cluster of people in the rationalist-minus-EA[1] set would be trying to solve aging and perfect cryonics, whereas the cluster of people in the EA-minus-rationalist set would be into global health and ending factory farming.
You had a critique (?) of rationalists: that they didn't have the motive force or coordination capacity to do much beyond AI safety. But Richard is saying that's because AI safety took all the talent of the rationalist movement; if AI never existed, those rationalists would obviously be doing something else. Maybe you could try to attack the hypothetical from a counterfactual angle? That the people in a hypothetical AI-less world wouldn't have coalesced around anything at all without AI safety, so there wouldn't even be an organized community around cryonics and aging? Or that even in our current world, rationalists should have gone into cryonics and aging despite AI looming over our heads?
I think the idea that rationalists in an AI-less counterfactual world would have gone into cryonics and aging is not at all disproven by showing that rationalists in our AI world have not revolutionized cryonics and anti-aging. That argument doesn't track for me at all; I agree with Richard here.
- ^
There are likely not that many people in the pure rationalist-minus-EA set, but I'm referring to dispositions and norms here: the set of self-identified rationalists who are further from EA.
What is your opinion on the recent developments in LLMs? I feel the nine months since your comment was made have shown they are not slowing down. I don't think the recent METR evals show superexponential growth, as the tasks are saturated at the high end (the 80%-success horizon is more in line with my model of reality), but they are still on the steeper post-2024 exponential that came with the introduction of reasoning models. My not-super-expert understanding is that labs are still massively scaling up and that this accounts for the majority of the improvements, though I'm not discounting improved algorithmic efficiency.
One of my main near-term concerns right now is that they get good at certain scientific research tasks, like developing novel viruses that can be made by ordering proteins online, before they become “agentic.” We're already seeing novel mathematical and physics research, at the low end of course, with experts saying this is mostly because of time constraints on human intelligence; but that will probably change over the next two years, so that, in my opinion, they will actually solve problems that humans were actively working on.
The main reason I use sunscreen is vanity. Sunscreen is the ultimate cosmetic anti-aging agent; nothing else comes close. I put sunscreen on my face almost every day, including in winter. It is very simple to look 10+ years younger than your actual age if you avoid alcohol and cigarettes and wear sunscreen every day. Huge +EV move.