Habitual mental motions might explain why people are content to get old and die

The fifth post in my recent series. More rambly than ideal, but Inkhaven is relentless.

People are perplexing. They do not seem to follow through on the logical conclusion of the information they have in combination with the values they have.

The most blunt example I have: I think it is neither secret nor controversial that civilization is more technologically advanced than it was a few centuries ago. In particular, we are more medically advanced. We have cures for many conditions, and people live longer and remain healthier for longer than in the past[1]. Clearly, change and improvement are possible – who knows how much?

People don’t like aging and death. People bemoan getting old and try to hide signs of aging at great expense. Death is regarded as tragic, and once someone has a serious condition, people invest in fighting it. Up close, people try to delay death.

The logical conclusion of “we have great reason to think progress against aging and death is possible” and “we don’t like aging and death” is to invest a lot more into anti-aging efforts.

People should clamor for research funding, demand results, and demand explanations for the lack of results; the field should be prestigious to work in, and it should be on the news. It should be a big deal.

There’s not nothing. Bryan Johnson is doing his thing. The SENS Research Foundation and other groups exist. But the typical person still expects to get old and die[2]. When they get close to it they try to delay it, but before then it is accepted as a fact of life[3].

This is crazy. This is really really crazy. It’s so inconsistent according to their own values. What is going on in their heads?

I’ve wondered about this for many years. In 2019, it felt like I had found some clue by modeling some people as living in Causal Reality and others in Social Reality. I think that’s not entirely wrong, but it’s very low resolution. With this series of posts, I have been building towards a better answer.


Let’s examine the line of reasoning I implied was obvious and transparent: (1) as evidenced by history, large medical progress is likely[4] possible, (2) overcoming death and aging is desirable, therefore (3) we should be investing in this.

There are specific mental motions occurring in my mind that cause the above to seem correct and to shape my behavior[5]. Those motions feel natural to me, but perhaps they’re actually atypical.

Though the following is high-level, I actually think many of the relevant motions where I differ are much lower-level, akin to muscle contractions rather than macro movements. But it’s much easier to talk about macro stuff. See this helpful comment from Steven Byrnes, which links to a discussion of lower-level traits.

Hidden assumptions and traits relevant to my thinking:

  • Large-scale agency: I believe myself to have the ability to influence very macro things through my actions.

  • History realism: past events and past states of the world aren’t like fiction, a story in a book, but something real that actually happened and is relevant to my life.

  • Future realism: likewise, the future is a real place that I’m going to, and one that can be shaped one way or another by human actions.

  • History/​present/​future distinction: the present isn’t the only way things can be.

  • Reductionism and “mechanicalism”: the human body and its conditions are like a car – a system made of parts that can be intervened on. Nothing magical, nothing essential. Just atoms and molecules and cells.

  • Willingness to endure uncomfortable beliefs: negative things can feel worse if they feel avoidable. Believing that death is not inevitable means accepting that current deaths are needless, and it might require effort from someone. That’s an uncomfortable conclusion I might have bounced off of.

  • Willingness to hold a weird conclusion: most people don’t live like aging and death should be fought, to take that up would be weird and likely get a negatively valenced reaction from others. That doesn’t faze me.

Alright, now let’s tell a story about me. This is illustrative and speculative, but maybe it is right. In my youth, I went to an ultra-Orthodox Jewish school[6]. I was taught that Jews are God’s chosen people, that Jews are why God created the world, and that Jews, through their actions, would bring about the Messianic era (utopia). I believed it. It was the endorsed social belief. So yeah, it’s pretty natural for me to think my actions affect Everything.

Likewise, Orthodox Jewish practice is routinely justified and explained with reference to past events that are treated as not a metaphor, no, things that actually happened. The Jews were in Egypt, and that’s why we have Passover. God commanded Abraham to circumcise; that’s why we do it too. Very real.

The group I was involved in made the Messianic era a big focus. It was imminent. Soon, any year or decade now. Big changes. BIG. It was normal and expected to think that. It’d be weird and heretical to predict things continuing as they are.

I was always technically inclined. Things are made of parts. They break, you fix them. I naturally saw things in a reductionistic way. I also think I’ve always had limited conformity instincts, such that my mind doesn’t put up resistance to believing weird things.

Even though I am now thoroughly disabused of the specific religious claims, the broader hypotheses seem like normal and reasonable things to believe. I might affect the entire world? Yeah, I was brought up thinking that. The past and future are real and might be different from the present? I mean, yeah, duh. Plus, growing up in an insular religious community does get you used to believing some wacky things that most other people don’t. You just feel a little superior for knowing the truth.

Not having grown up in any other environment, I really don’t know what messages other people got. Did the events of Macbeth, Moby Dick, and Magna Carta get lumped into the same bucket as not really relevant to life? All just stories.

What is real anyway?

I have another half-formed idea that a key variable between people is what seems real, “real” defined as being actually plugged into one’s decision-making. This is related to Taking Ideas Seriously but broader. I think you can follow the steps of a deductive argument, yet it takes something beyond that for deductive arguments to change your decisions.

I can imagine someone who can perform deductive reasoning, but such reasoning is only used to prepare material for explicit arguments for things they believe for other reasons. Deductive reasoning is part of a social reasoning game, not of broader decision-making.

Reductionism is another thing that might not feel real to many. Someone might see a doctor performing reductionistically-sensible actions like prescribing antibiotics to fight a bacterial infection, yet their mind relates to it more shamanistically: there’s a problem that’s solved by recruiting help from the person who is empowered to help with that kind of problem. It’s a social role thing. One becomes capable of dealing with a problem by going through the correct rites of passage. The details are just extra ambiance for the story.

I’m imagining a “things are made of parts” belief/​mental motion that some people have a lot more of than others, from some mix of natural inclination and of it having been a useful way for them to think about things.

What’s foreground, what’s background for you?

This is my old guess, but I do wonder whether, for you, the focus of the world is people, with everything else just a backdrop, or whether the world is a whole thing going on and people are just part of it. The former might set you up to always see things as Player vs Player, while the latter admits Player vs Environment.

If, going through your life, you always made progress via succeeding in social ways, e.g., being popular, being liked, being cool, etc., then your mind learns to focus on the social stuff and the rest doesn’t land. The non-social is real in the abstract, but it’s not hooked up to the decision-making that drives your actions.

The inverse of this is the socially oblivious: people who choose their actions for many reasons, but the reactions of others are not among them.

Implications

Arguments go differently if you have two parties whose decisions are both impacted by deductive chains of reasoning but who differ on some premises, versus two parties where one relates to deductive reasoning as something connected to decision-making and the other sees it as part of a verbal game against foes.

One party might ask, “Is this true? Is this valid?” The other is running each argument through various filters and simulations for how it will sound to friends and foes. “How will this affect my alliances? What will the impact be on the undecided parties I want to court? Do I win in public opinion?”

At the observed level, you might have both parties discussing deductive arguments, but how they’re relating to them and what they’re doing with them could be very different. And different at a level so deep and invisible that neither side even sees what it’s taking for granted. Thoughts are surprisingly detailed and remarkably autonomous.

We’re in a world where a lot of people from the community who think like me are trying to persuade the world that AI is a major risk and certain policies are a good idea. Now, things are not all-or-nothing or binary here, but I think many or most of the people we’re interacting with are running different mental programs, learned over a lifetime. Neither side is even aware of how different and strange the other’s inner mental life is.

If we want to figure out how to convince the world to be sane, it might help to better understand the kind of cognition they’re running and understand how our statements will land in the context of that.

This might be very hard. As above, I think the real differences are lower-level than the things it’s easy for me to point at. That makes it harder to imagine.

An analogy here is neurotransmitters and receptors. You might have heavily developed receptors for reductionistic arguments. They might not. In comparison, their receptors for social arguments are heavily developed. It might prove hard to really occupy the headspace of the other.

Fears of Symmetry

Scott Alexander kindly bequeathed the concept of Symmetric vs Asymmetric Weapons. Asymmetric weapons, like logical debate, are stronger on the side of truth than the side of falsehood. Symmetric weapons, like violence, are equally useful to both sides.

I am afraid that even if people all around are technically capable of logical debate, for some, this is an underused, if not functionally atrophied, pathway. Instead, the only way to influence them is with symmetric weapons. That’d be really awful. There’s no protection there.

Is there hope?

I’m persuaded that if you can just understand how something works, and understand its constituent pieces, and how they interact, then you can fix it ;)

No, but seriously: I do think modeling this better is a good first step. I also don’t think things are binary or fixed here. As I wrote in Same cognitive paints, exceedingly different mental pictures, to a large extent I think people broadly are capable of the same mental motions; we’ve just learned to favor some over others. The learning is ongoing, and it’s possible to influence people’s choice of mental motion. There might be ways of framing arguments that get people to relate to some things as more real and decision-relevant than they did previously: bump up the weights/​coefficients tied to helpful, asymmetric mental motions by altering the context in which the reasoning happens, and other similar approaches.

  1. ^

    I think fewer people are thinking about this, but there are also creatures like tortoises and sharks (and jellyfish?) with much longer lifespans than humans, which are evidence that biology doesn’t inherently require the demise of mind and flesh.

  2. ^

    In his book The AI Does Not Hate You, Tom Chivers recounts performing an Internal Double Crux with guidance from Anna Salamon.

    Anna Salamon: What’s the first thing that comes into your head when you think the phrase, “Your children won’t die of old age?”

    Tom Chivers: “The first thing that pops up, obviously, is I vaguely assume my children will die the way we all do. My grandfather died recently; my parents are in their sixties; I’m almost 37 now. You see the paths of a human’s life each time; all lives follow roughly the same path. They have different toys—iPhones instead of colour TVs instead of whatever—but the fundamental shape of a human’s life is roughly the same. But the other thing that popped up is a sense of “I don’t know how I can argue with it”, because I do accept that there’s a solid chance that AGI will arrive in the next 100 years. I accept that there’s a very high likelihood that if it does happen then it will transform human life in dramatic ways—up to and including an end to people dying of old age, whether it’s because we’re all killed by drones with kinetic weapons, or uploaded into the cloud, or whatever. I also accept that my children will probably live that long, because they’re middle-class, well-off kids from a Western country. All these things add up to a very heavily non-zero chance that my children will not die of old age, but they don’t square with my bucolic image of what humans do. They get older, they have kids, they have grandkids, and they die, and that’s the shape of life. Those are two fundamental things that came up, and they don’t square easily.”

  3. ^

    I think people’s behavior here is much less crazy if they have religious beliefs that include a persistent soul and an afterlife. Although it’s more insane if eternal torture is a possibility they believe in and they’re not more worried about it for themselves and others.

  4. ^

    I don’t think the evidence implies that death and aging are definitely curable, just that we have reason to think they might be. And that’s enough.

  5. ^

    I think AI is more urgent than aging/​death, but if I wasn’t working on AI, I might well be working on aging/​death.

  6. ^

    I did not come from a family with a continuous ultra-Orthodox heritage, so the messages weren’t that strong at home or from grandparents, etc.; my family had its own journey with levels of religiosity, but I got a lot of messaging at school.