an analogy between longtermism and lifespan extension
One proposition of longtermism is that the extinction of humanity would be especially tragic not just because of the number of people alive today who would die, but because it would eliminate the possibility for astronomical numbers of future people to exist. This implies that humanity should reprioritize resources away from solving near-term problems and towards safeguarding itself from extinction.[1]
On a personal level, I have a chance of living to experience longevity escape velocity, the point at which anti-aging technology is advancing fast enough that I would only die from accidents rather than natural causes. I may live for thousands of years, and those years would be much better than my current life because of improvements in general quality of life. Analogous to the potential of many future generations, this future would be so awesome for me that I should be willing to sacrifice a lot to increase the chance that it happens.
I could follow a version of Bryan Johnson’s “Blueprint” lifestyle, which he designed to slow or reverse aging as much as possible, for around $12,000 per year. This might not be worth it. Suppose this protocol would extend my expected lifespan by 20%, but the extra $12,000 per year, if spent elsewhere, would increase my quality of life by 30%. This would mean I could gain more (quality of life × lifespan) by spending that money elsewhere.[2]
However, a lifestyle intervention that would nominally increase my expected lifespan by 20% would, if I followed it for the rest of my life, actually increase it by much more than 20%, because our knowledge of how to extend lifespan keeps growing, and every extra year alive lets me benefit from that newer knowledge. In other words, spending money on longevity-promoting lifestyle interventions instead of immediate quality of life increases the chance that I live to experience longevity escape velocity, so it may be worth it.
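To make the tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (remaining lifespan, the chance of reaching escape velocity, the value assigned to it) are hypothetical placeholders, not estimates I actually endorse:

```python
# Toy comparison of the two ways to spend the ~$12k/year. Every number here
# is a made-up placeholder for illustration only.

baseline_years = 50           # hypothetical expected remaining lifespan
baseline_quality = 1.0        # arbitrary quality-of-life units per year

# Option A: spend the money on immediate quality of life (+30% quality).
value_quality = baseline_years * baseline_quality * 1.3

# Option B: spend it on the longevity protocol (+20% lifespan), which also
# buys a small chance of reaching longevity escape velocity (LEV).
p_lev = 0.10                  # hypothetical chance the extra years get me to LEV
lev_value = 1_000             # hypothetical quality-years if LEV is reached
value_longevity = baseline_years * 1.2 * baseline_quality + p_lev * lev_value

print(f"Spend on quality of life: {value_quality:.0f} quality-years")   # 65
print(f"Spend on longevity:       {value_longevity:.0f} quality-years") # 160

# With these made-up numbers, the small chance of reaching escape velocity
# dominates the expected value, which is the point of the analogy.
```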
society spending resources on neartermist issues : me spending money on immediate quality of life :: society spending money on longtermist issues : me spending money on lifespan extension
And ensuring that the project of populating the universe goes well. E.g., preventing S-risks.
This also means less money to donate to charity and/or less slack to be in a position to work directly on the world’s important problems.
It’s always interesting to see what type of people are interested in longevity. Most people would like to live longer, but some are obsessed. I wonder whether people historically viewed their children as copies of themselves more than we do now. People seem to have lived similar lives for multiple generations in the past, compared to the social and geographic mobility we now enjoy. Does this detach us from our children existentially? Also, what kind of people would view others as a viable path for propagating their own genes, and what kind of people wouldn’t, seeing others instead as competition?
I would guess that both in the past and now, some people see their children as copies of themselves, and some do not. (Though it is possible that the relative numbers have changed.) Seems to be more about personality traits than about… calendar.
It does provide an alternative to having kids as a way of self-extension. The two should, in my view, be seen as deeply related, so long as the parent does enough memetic work to fully encode their personality. I wouldn’t mind being a Trill. But it is an immense loss to lose the contents of a mind. My offspring should have my knowledge available, as a Trill would. And I’d like my knowledge to be available to anyone. In the meantime, I’d still like my form and personality to continue as one for much longer than humans historically have, and I’d like the same for my children and everyone’s children. We can extend lifespan very significantly without messing up the replicator equation, if we also create technologies for dramatically more efficient (lower-temperature) forms of life than biology. When true ascension is possible, my family will be deep-space extropians rather than living on the surface of planets, with every part of the body participating in brainlike computation and every part an intelligence-directed work of art.
Ideally, a competitive market would drive the price of goods close to the cost of production, rather than the price that maximizes the seller’s profit. Unfortunately, some mechanisms prevent this.
One is the exploitation of the network effect, where a good is more valuable simply because more people use it. For example, a well-designed social media platform is useless if it has no users, and a terrible, addictive platform can be useful if it has many users (Twitter).
This makes it difficult to break into a market and gives popular services the chance to charge what people will pay instead of the minimum amount required to keep the lights on. Some would say this is the price that things should be, but I disagree. Life should be less expensive for consumers, and diabetic people shouldn’t need to pay an arm and a leg for insulin.
Or maybe I’m just seething that I willingly paid $40 for a month’s access to a dating app’s “premium” tier 🤢.
Yes, the increased chance that I find a good person to date in the next month is worth ≥$40 to me. It’s still the most efficient way to discover and filter through other single people near me. But I doubt it costs this much to maintain a dating app, even considering that the majority of people don’t pay for the premium tier.
The other thing that irks me about the network effect is that I don’t always like the thing that matches the public’s revealed preferences. I think this dating app is full of dark patterns – UI tricks that make it as addictive as possible. And it encourages shallow judgement of people. I would much rather see people’s bio front and center than their face, and I want them to have more space to talk about themselves. I wish I could just fill out a survey on what I’m looking for and be matched with the right person. Alas, OKCupid has fallen out of fashion, so instead I must dodge dark patterns and scroll past selfies because human connection has been commercialized.
Most of the “mechanisms which prevent competitive pricing” come down to monopoly. The network effect is “just” a natural monopoly, where the first success gains so much ground that competitors can’t really get a start. Another curiosity is the difference between average cost and marginal cost. One more user does not cost $40. But, especially in growth mode, the average cost per user (of your demographic) is probably higher than you think—these sites are profitable, but not amazingly so.
None of this invalidates your anger at the inadequacy of the modern dating equilibrium. I sympathize that you don’t have parents willing to arrange your marriage and save you the hassle.
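To make the average-versus-marginal-cost distinction above concrete, here is a toy calculation. The figures are invented for illustration and are not based on any real company’s financials:

```python
# Toy illustration of average vs. marginal cost, using invented numbers.

fixed_costs = 2_000_000        # hypothetical monthly spend: engineering, moderation, marketing
marginal_cost_per_user = 0.50  # hypothetical monthly cost of serving one more user
users = 100_000

total_cost = fixed_costs + marginal_cost_per_user * users
average_cost_per_user = total_cost / users

print(f"Marginal cost of one more user: ${marginal_cost_per_user:.2f}/month")  # $0.50
print(f"Average cost per user:          ${average_cost_per_user:.2f}/month")   # $20.50

# Serving one extra subscriber costs almost nothing, but once fixed costs are
# spread over the user base, the average cost can be a sizeable chunk of a
# $40 subscription.
```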
I didn’t know about either of those concepts (network effects being classified as a natural monopoly and the average vs. marginal cost). Thanks!
While I am frustrated by the current dating landscape, I think dating apps are probably a net positive – before they were popular, it was impossible to discover as many people. And while arranged marriages probably have the same level of satisfaction as freely chosen marriages, I’m glad that I have to find my own partner. It adds to my life a sense of exploration and uncertainty, incentivizes me to work on becoming more confident/attractive, and helps me meet more cool people as friends.
Or maybe I’m just rationalizing.
concision is especially important for public speakers
If I were going to give a talk in front of 200 people, each unnecessary minute would waste about 200 person-minutes (~3 hours) of the audience’s time in total, so I should be willing to spend up to 3 hours of my own time to cut it.
In “95%-ile isn’t that good”, Dan Luu writes:
Most people consider doing 30 practice runs for a talk to be absurd, a totally obsessive amount of practice, but I think Gary Bernhardt has it right when he says that, if you’re giving a 30-minute talk to a 300 person audience, that’s 150 person-hours watching your talk, so it’s not obviously unreasonable to spend 15 hours practicing (and 30 practice runs will probably be less than 15 hours since you can cut a number of the runs short and/or repeatedly practice problem sections).
Maybe someone should make a dating app for effective altruists, where people can indicate which organizations they work for / which funds they receive, and conflicts of interest are automatically detected. Potential solution to the conflict between professional and romantic relationships in this weird community. Other ideas:
Longer profiles, akin to dating docs
Calendly date button
OK Cupid-style matching algorithm, complete with data such as preferred cause area
Tools for visualizing your polycule graph
Built-in bounties or prediction markets to incentivize people to match-make
A feature which is just a clone of reciprocity.io, where you can anonymously indicate who you’d be open to dating, and if two people indicate each other, they both get notified (sketched below)
This is half a joke and half serious. At least it’s an interesting design challenge. How would you design the ideal dating app for a unique community without traditional constraints like “must be gamified to make people addicted”, “needs a way to be profitable”, “must overcome network effects”, and “users aren’t open-minded to strange features”?
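In that spirit, here is a minimal sketch of two of the pieces above: reciprocity-style mutual matching plus a naive conflict-of-interest check. The names, data structures, and affiliations are all hypothetical placeholders; a real app would need real accounts, privacy guarantees, and a richer notion of conflict of interest (e.g. funder–grantee edges):

```python
# Hypothetical sketch: mutual-match detection and a naive conflict-of-interest check.
from itertools import combinations

# Made-up example data.
interested_in = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"dave"},
}
affiliations = {
    "alice": {"Example Fund"},
    "bob": {"Example Org"},
    "carol": {"Example Org"},
    "dave": {"Example Fund"},
}

def mutual_matches(interested):
    """Return pairs where each person has (anonymously) selected the other."""
    return [
        (a, b)
        for a, b in combinations(sorted(interested), 2)
        if b in interested.get(a, set()) and a in interested.get(b, set())
    ]

def conflicts_of_interest(pair, affiliations):
    """Flag any shared organization or fund between the two people."""
    a, b = pair
    return affiliations.get(a, set()) & affiliations.get(b, set())

for pair in mutual_matches(interested_in):
    print(pair, "conflicts:", conflicts_of_interest(pair, affiliations) or "none")
# -> ('alice', 'bob') conflicts: none
```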
Did you know that caffeine is not a magical property of certain drinks? It’s just a chemical you can buy in pill form. You don’t “need coffee in the morning”, you need caffeine. So if you don’t like the taste or the preparation time or the price or that it stains your teeth and messes with your digestion, you can just supplement the caffeine and drink something healthier. And don’t get me started on energy drinks...
Plus, caffeine pills give you more control than caffeinated drinks. Unlike coffee or tea, you know exactly how much caffeine is in each dose, and you can buy “extended release” pills which have a smoother peak. Many people drink green tea because it contains L-theanine, which reduces jitteriness, but it doesn’t contain the ideal ratio of L-theanine to caffeine. Fear not, because L-theanine isn’t magical either! You can also buy pills that give you the right dosage.
Caffeinated drinks are a front! They’re a sham! Peel back the curtain! Free your mind! :P
rational vanity
epistemic status: literal shower thought, uncertain
Gaining social status and becoming more attractive are useful goals, because more attractive and higher-status people are treated better (see the halo effect) and because both boost one’s self-confidence. Yet the desire to improve on these fronts is seen as vain, a vice rather than a virtue. I think there are three reasons this could be:
Our evolutionary ancestors who sought status were more likely to have children, so this desire is biologically hardwired into us. Someone who takes action to increase their status might be doing so out of a cold, rational calculation that says it would help them to achieve their goals, but it’s more likely that they’re doing it because it feels good.
Often, out of this irrational motivation, people take actions which increase their status but aren’t useful in achieving their goals. For example, buying a fancy, expensive car probably isn’t worth the money because there are more efficient ways to convert money into status and it might make one come off as extravagant and douchey instead of classy.
Status and attractiveness are zero-sum games because they only mean anything in relation to other people. Everyone buying cosmetic plastic surgery would be a waste of surgeons’ time because humanity wouldn’t be better off overall.[1] This means that spending resources to move up the status ladder is like defecting against the rest of humanity (see tragedy of the commons). To prevent people from defecting, society established a norm of judging negatively those who are clearly chasing status, and so people have to be sneaky about it. They either have to have some sort of plausible deniability (“I only bought these expensive clothes because I like how they look, not because I’m chasing status”) or a genuine reason other than “I want people to treat me better” (“I only got plastic surgery because my unattractiveness was having an especially negative effect on my mental health”).
So, here’s the result: someone who rationally chases status and makes themselves more attractive in order to better achieve their goals, even if altruistic, is seen instead as someone succumbing to their natural instinct and defecting against the societal norm of not playing the status game.
Although, I’ve heard the argument that if beauty is intrinsically valuable, humanity would be better off if everyone bought plastic surgery because there would be more beauty in the world.
Potential issues with this thought:
Conflates attractiveness and status. Talks about them as if they have the same properties, which might not be the case.
Do people actually see the pursuit of increased attractiveness and status negatively? For example, if someone said “I want to go to the gym to look better”, I think that would be seen as admirable self-improvement, not vanity.
Is the norm of judging vanity negatively actually a result of the zero-sum game? I don’t know enough about sociology to know how societal norms form.
Is irrational vanity actually common? Maybe doing things like buying extravagant cars is less common than I think, or these sorts of acts are more “rationally vain” than I think, i.e. they really are cost-effective ways to increase status.
I try to avoid most social media because it’s addictive and would probably have negative effects on my self-image. But I’ve found it motivating to join social media sites that are built around positive habits: Strava and Goodreads leverage my innate desire for status and validation to make me run and read more often. They still have dark patterns, and probably still negatively affect my self-image a bit, but I think my use of them is net positive.
Originally, I wanted to write this as a piece of advice, as in “you should consider using social media to motivate yourself in positive habits”, but I tentatively think this is a bad writing habit of mine and that I should be more humble until I’m more confident about my conclusions. What do you think?
altruism is part of my self-improvement feedback loop
One critique of utilitarianism is that if you seriously use it to guide your decisions, you would find that, for any given decision, the choice that maximizes overall wellbeing is usually not the one that does any good for your personal wellbeing, so you would turn into a “happiness pump”: someone who only generates happiness for others, to their own detriment. And, wouldn’t you know it, we see people like this pop up in the effective altruism movement (whose philosophy stems mostly from utilitarianism), particularly among those who pursue earning to give. While most are happy to give away 10% of their income to effective charities, I’ve heard of some who have taken it to the extreme, to the point of calculating every purchase they make in terms of the days of life they could have counterfactually saved via a donation.
However, since its beginnings, EA has shifted its focus away from earning to give and towards encouraging people to pursue careers where they can work directly on the world’s most important problems. For someone with the privilege to consider this kind of career path, I believe this has changed the incentives and made the pursuit of self-fulfillment more closely aligned with maximizing expected utility.
the self-improvement feedback loop
Self-improvement is a feedback loop, or rather, a complicated web of feedback loops. For example,
The happier you are, the more productive you are, the more money you make, the happier you are.
The more often you exercise, the better your mental health, the better your executive function, the less often you skip your workouts, the more often you exercise.
The more often you exercise, the stronger you become, the more attractive you become, the more you benefit from the halo effect, the more likely you are to get a promotion, the more money you make.
It all feeds into itself. Maybe this is just another way of phrasing the effect of accumulated advantage.
let’s throw altruism into the loop
In my constant battle to nudge this loop in the right direction, I don’t see altruism as a nagging enemy that would drain energy I could use to get ahead. Rather, I see it as part of the loop.
Learning about the privilege I have (not only in the US but also globally), and how I can meaningfully leverage that privilege as an opportunity to help massive numbers of people who are worse off, has given me an incredible amount of motivation to better myself. Before I discovered EA, my plan was to become a software developer and retire as early as possible. Great life plan, don’t get me wrong – but when I learned I could take a shot at solving the world’s most important problems, I realized it was a super lame and selfish waste of privilege in comparison.
Instead of thinking about “how do I make as much money as possible?”, I now think about
How do I form accurate beliefs about the world?
What does the world look like, where will it be in the future, and where can I fit in to make it better?
Which professional skills are the best fit for me and the most important for having a positive impact?
How do I become as productive and agentic as possible?
Notice how this differs from the happiness pump situation. It’s more focused on “improving the self to help others” than “sacrificing personal wellbeing to help others”. This paradigm shift in what it looks like to try to do as much good as possible brings altruism into the self-improvement feedback loop. It gives my life a sense of meaning, something to work towards. Altruism isn’t a diametrically opposed goal to personal fulfillment; it’s mostly aligned.
And this effect could be even stronger in a group. In addition to the individual loops, seeing other people happy makes you happy, seeing other people be productive inspires you to do something productive, people in the group could help each other financially, exercise together, etc.
Yeah definitely! It gets even more complicated when you throw other humans in the loop (pun not intended).
I think the happiness-pump outcome only happens if you take an overly restrictive/narrow/naive view of consequences. Humans are generally not productive if they’re not happy, so the happiness pump strategy is probably not actually good for the net well-being of other people long-term.
I agree, maybe I should state that overtly in this post. It’s essentially an argument against the idea of a happiness pump, because of the reason you described.
Three related concepts.
On redundancy: “two is one, one is none”. It’s best to have copies of critical things in case they break or go missing, e.g. an extra cell phone.
On authentication: “something you know, have, and are”. These are three categories of ways you can authenticate yourself.
Something you know: password, PIN
Something you have: key, phone with 2FA keys, YubiKey
Something you are: fingerprint, facial scan, retina scan
On backups: the “3-2-1” strategy.
Maintain 3 copies of your data:
2 on-site but on different media (e.g. on your laptop and on an external drive) and
1 off-site (e.g. in the cloud).
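As a toy illustration of the rule (the data and field names are made up), a quick check that a set of backup locations satisfies 3-2-1 might look like this:

```python
# Hypothetical check of the 3-2-1 rule against a list of backup locations.

copies = [
    {"name": "laptop SSD",     "medium": "internal disk",  "offsite": False},
    {"name": "external drive", "medium": "external disk",  "offsite": False},
    {"name": "cloud backup",   "medium": "object storage", "offsite": True},
]

def satisfies_3_2_1(copies):
    enough_copies = len(copies) >= 3                        # 3 copies of the data
    enough_media = len({c["medium"] for c in copies}) >= 2  # on at least 2 media
    has_offsite = any(c["offsite"] for c in copies)         # at least 1 off-site
    return enough_copies and enough_media and has_offsite

print(satisfies_3_2_1(copies))  # True
```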
Inspired by these concepts, I propose the “2/3” model for authentication:
Maintain at least three ways you can access a system (something you have, know, and are). If you can authenticate yourself using at least 2 out of the 3 ways, you’re allowed to access the system.
This protects against both false positives (an attacker needs to breach at least two methods of authentication) and false negatives (you don’t have to prove yourself using all three methods). It provides redundancy on both fronts.
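A minimal sketch of the rule in Python (the factor names are just illustrative stand-ins for whatever checks a real system would run):

```python
# "2/3" authentication: grant access if at least two of the three factor
# categories (know / have / are) check out. The checks here are placeholders.

def two_of_three_auth(knows_secret: bool, has_device: bool, is_biometric_match: bool) -> bool:
    """Return True if at least 2 of the 3 authentication factors succeed."""
    return sum([knows_secret, has_device, is_biometric_match]) >= 2

# A phished password alone is not enough to get in...
assert not two_of_three_auth(knows_secret=True, has_device=False, is_biometric_match=False)
# ...but losing your security key doesn't lock you out either.
assert two_of_three_auth(knows_secret=True, has_device=False, is_biometric_match=True)
```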
This was originally a comment, found here.