I think that these are all pretty relevant ways to think about being an EA, but are mostly of a different fundamental type than the thing I’m pointing at. Let me get a bit more into the aforementioned math to show why this is approximately a binary categorization along the axis I was pointing at in this post.
Say that there are three possible world states:
I live a normal life and sit on the couch a lot. I take care of my plants.
I’m a highly engaged EA for many years and do a lot of verifiably altruistic things.
I’m a serial killer.
As a culture, on average, our values look something like:
EA > gardener > murderer
This is very reasonable and I don’t think we can or should change this, as a community.
I have some mental approximation of a utility function. One of the main differences between my internal representation and an actual utility function is that the point I choose as “zero utility” doesn’t matter in the formalism (a utility function is only defined up to a positive affine transformation, so adding a constant to every outcome changes nothing about which choices it recommends), but very much matters to my emotions. If we set EA = 0 utility, then the myopic point-maximizing part of my brain feels okay if I do EA things, but awful if I’m getting negative points by being either of the other options. This is the moral obligation frame, where things are only barely emotionally okay if you push as far up the preference ordering as possible.
If we set gardener=0, then things feel emotionally okay if I just take the normal path. I’m not gaining or losing points. It’s then positively great if I do EA things and still positively bad if I kill people. This is the moral opportunity frame, and I find it emotionally much better for me. I predict that this frame is better for community health as well, although I have only vibes and anecdata to back me up on this claim.
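To make the zero-point shift concrete, here’s a minimal sketch (the specific numbers are mine and purely illustrative): adding the same constant to every option leaves the ordering untouched, but changes which options land at or above zero, which is the part my emotions actually track.

```python
# A toy illustration of the zero-point shift; the numbers are made up
# and purely illustrative -- only the ordering EA > gardener > murderer matters.

opportunity_frame = {"EA": 10, "gardener": 0, "murderer": -100}

# Shift every value by the same constant so that EA sits at zero instead.
shift = -opportunity_frame["EA"]
obligation_frame = {k: v + shift for k, v in opportunity_frame.items()}
# -> {"EA": 0, "gardener": -10, "murderer": -110}

# The preference ordering is identical in both frames...
assert sorted(opportunity_frame, key=opportunity_frame.get) == \
       sorted(obligation_frame, key=obligation_frame.get)

# ...but which options sit at or above zero (what feels "okay") differs.
okay_opportunity = [k for k, v in opportunity_frame.items() if v >= 0]  # ['EA', 'gardener']
okay_obligation  = [k for k, v in obligation_frame.items() if v >= 0]   # ['EA']
print(okay_opportunity, okay_obligation)
```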
There are several other points I have left unnamed:
murderer = 0: I really, really don’t want this to be our culture, for reasons that are hopefully obvious.
gardener < 0 < EA: this is just a less extreme moral obligation frame, where you don’t need to do as much for things to be okay.
murderer < 0 < gardener: this is a more lenient moral opportunity frame, where occasionally doing less good than the default is still okay. I think there are healthy and unhealthy versions of this.
everything is > 0: same problem as murderer = 0; please don’t make this our culture.
everything is < 0: I also really don’t want this culture, because then everything anyone does is bad, and that is not how you build a community.
Now that’s a lot of possibilities. I promised that I had “approximately a binary categorization”, so where does the binary come in? Well, the dividing line I’m drawing is ultimately “is being a normal member of society ‘okay’ or ‘not okay’?” Alternatively, ask how our community responds to Carol, the gardener. Are we friends with her? I say yes. Do we give her the same positive reinforcement for her new daffodils that we give to someone who donates a large sum to EA charities? I say no.
(I am intentionally excluding the various “evil” cultures from this consideration. Technically it’s not a binary if you want to include them, but I really don’t see why we would ever consider doing that in real life.)
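If it helps, that dividing line can be phrased as a one-line check (again just a sketch with my own illustrative numbers, not anything formal): the frame’s flavor depends only on the sign of the gardener’s utility.

```python
# A sketch of the binary: a zero-point choice is "opportunity-flavored" exactly when
# being a normal member of society (Carol the gardener) scores at or above zero.

def frame_flavor(utilities: dict) -> str:
    return "moral opportunity" if utilities["gardener"] >= 0 else "moral obligation"

print(frame_flavor({"EA": 10, "gardener": 0, "murderer": -100}))   # moral opportunity
print(frame_flavor({"EA": 0, "gardener": -10, "murderer": -110}))  # moral obligation
```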
These other frames you mention are important shards of a healthy EA community, I think. They’re just not quite the concept boundary I was trying to draw.