I also like the quote. I consider meaning and fulfillment of life goals morally important, so I’m against one-dimensional approaches to ethics.
However, I think it’s a bit unfair that just because the quote talks about suffering (and not pleasure/positive experience), you then go on to talk exclusively about suffering-focused ethics.
Firstly, “suffering-focused ethics” is an umbrella term that encompasses several moral views, including very much pluralistic ones (see the start of the Wikipedia article or the start of this initial post).
Second, even if (as I do from here on) we assume that you’re talking about “exclusively suffering-focused views/axiologies,” which I concede make up a somewhat common minority of views in EA at large and among suffering-focused views in particular, I’d like to point out that the same criticism (of “map-and-territory confusion”) applies just as much, if not more strongly, against classical hedonistic utilitarian views. I would also argue that classical hedonistic utilitarianism has had, at least historically, more influence among EAs and that it describes better where SBF himself was coming from (not that we should give much weight to this last bit).
To elaborate, I would say the “failure” (if we want to call it that) of exclusively suffering-focused axiologies is incompleteness rather than mistakenly reifying a proxy metric for its intended target. (Whereas the “failure” of classical hedonism is, IMO, also the latter.) I think suffering really is one of the right altruistic metrics.
The best answer, IMO, to "What constitutes (morally relevant) suffering?" is: something that is always important to the being that suffers. I.e., suffering is always bad (or, in its weakest forms, suboptimal) from the perspective of the being that suffers. I would define suffering as an experienced need to change something about one's current experience. (Or to end said experience, in the case of extreme suffering.)
(Of course, not everyone who subscribes to a form of suffering-focused ethics would see it that way – e.g., people who see the experience of pain asymbolia as equally morally disvaluable as what we ordinarily call "pain" have a different conception of suffering. Similarly, I'm not sure whether Brian Tomasik's pan-everythingism about everything would give the same line of reasoning as I would for caring a little about "electron suffering," or whether this case is so different and unusual that we have to see it as essentially a different concept.)
And, yeah, bringing to our mind the distinction between map and territory, when we focus on the suffering beings and not the suffering itself, we can see that there are some sentient beings (“moral persons” according to Singer) to whom things other than their experiences can be important.
Still, I think the charge "you confuse the map for the territory, the measure for the man, the math with reality" sticks much better against classical hedonistic utilitarianism. After all, take the classical utilitarian's claim "pleasure is good." I've written about this in a shortform on the EA Forum. As I would summarize it now, when we talk about "pleasure is good," there are two interpretations that can be used for a motte-and-bailey. I will label these two claims "uncontroversial" and "controversial." Note how the uncontroversial claim has only vague implications, whereas the controversial one has huge and precise implications (a maximizing hedonist axiology).
(1) Uncontroversial claim: When we say that pleasure is good, we mean that all else equal, pleasure is always unobjectionable, and often it is what we higher-order desire.
This uncontroversial claim is compatible with “other things also matter morally.”
(For comparison, the uncontroversial interpretation for “suffering is bad” is “all else equal, suffering is always [at least a bit] objectionable, and often something we higher-order desire against.”)
(2) Controversial claim: When we say that pleasure is good, what we mean is that we ought to be personal hedonist maximizers. This includes claims like “all else equal, more pleasure is always better than less pleasure,” among a bunch of other things.
"All else equal, more pleasure is always better than less pleasure" seems false. At the very least, it's really controversial (that's why it's not part of the uncontroversial claim, which just says "pleasure is always unobjectionable").
When I’m cozily in bed half-asleep and cuddled up next to my soulmate and I’m feeling perfectly fulfilled in life in this moment, the fact that my brain’s molecules aren’t being used to generate even more hedons is not a problem whatsoever.
By contrast, “all else equal, more suffering is always worse than less suffering” seems to check out – that’s part of the uncontroversial interpretation of “suffering is bad.”
So, “more suffering is always worse” is uncontroversial, while “more intensity of positive experience is always better (in a sense that matters morally and is worth tradeoffs)” is controversial.
That’s why I said the following earlier on in my comment here:
I would say the “failure” (if we want to call it that) of exclusively suffering-focused axiologies is incompleteness rather than mistakenly reifying a proxy metric for its intended target. (Whereas the “failure” of classical hedonism is, IMO, also the latter.) I think suffering really is one of the right altruistic metrics.
But “maximize hedons” isn’t.
The point to notice for proponents of an exclusively suffering-focused axiology is that humans have two motivational systems, not just the system-1 motivation that I see as being largely about the prevention of short-term cravings/suffering. Next to that, there are also higher-order, "reflective" desires. These reflective desires are often (though not in everyone) about (specific forms of) happiness or about things other than experiences (or, perhaps better put, about how specific experiences are embedded in the world, their contact with reality).
When I’m cozily in bed half-asleep and cuddled up next to my soulmate and I’m feeling perfectly fulfilled in life in this moment, the fact that my brain’s molecules aren’t being used to generate even more hedons is not a problem whatsoever.
Obviously I agree with this. I find it strange that you would take me to be disagreeing with this and defending some sort of pure pleasure version of utilitarianism. What I said was that I care about “meaning, fulfillment, love”—not just suffering, and not just pleasure either.
Where I agree with classical utilitarianism is that we should compute goodness as a function of experience, rather than e.g. preferences or world states (and then integrate over your anthropic prior, as in UDASSA). But I think that function is extremely complex, dependent on one’s entire lifetime, and not simply reducible to basic proxies like pleasure or pain.
I think I would also go a bit further, and claim that, while I agree that both pain and pleasure should be components of what makes a life experience good or bad, neither pain nor pleasure should be very large components on their own. Like I said above, I tend to think that things like meaning and fulfillment are more important.
Obviously I agree with this. I find it strange that you would take me to be disagreeing with this and defending some sort of pure pleasure version of utilitarianism. What I said was that I care about “meaning, fulfillment, love”—not just suffering, and not just pleasure either.
That seems like a misunderstanding – I didn’t mean to be saying anything about your particular views!
I only brought up classical hedonistic utilitarianism because it’s a view that many EAs still place a lot of credence on (it seems more popular than negative utilitarianism?). Your comment seemed to me to be unfairly singling out something about (strongly/exclusively) suffering-focused ethics. I wanted to point out that there are other EA-held views (not yours) where the same criticism applies the same or (arguably) even more.
Where I agree with classical utilitarianism is that we should compute goodness as a function of experience, rather than e.g. preferences or world states
Isn't this incompatible with caring about genuine meaning and fulfillment, rather than just feelings of them? For example, on this view it would be better for you to feel like you're doing more good than to actually do good. It would be better to be put into an experience machine and be systematically mistaken about everything you care about – e.g., about whether the people you love even exist (are conscious, etc.) at all – even against your own wishes, as long as it feels more meaningful and fulfilling (and you never find out it's all fake, or that cost can be outweighed). You could also have what you find meaningful changed against your wishes, e.g., be made to find counting blades of grass very meaningful, more so than caring for your loved ones.
FWIW, this is also an argument for non-experientialist "preference-affecting" views, similar to person-affecting views. On common accounts of how we weigh or aggregate, if there are subjective goods, then they can be generated and can outweigh the violation and abandonment of your prior values, even against your own wishes, if they're strong enough.
The way you describe it makes it sound awful, but actually I think simulations are great, and you shouldn't think that there's a difference between being in a simulation and being in base reality (whatever that means). Simple argument: if there's no experiment that you could ever possibly do to distinguish between two situations, then I don't think that those two situations should be morally distinct.
Well, there could be ways to distinguish, but it could be like a dream: much of your reasoning is extremely poor, yet you're very confident in it anyway. Like maybe you believe that your loved ones in your dream saying the word "pizza" is overwhelming evidence of their consciousness and love for you. But if you investigated properly, you could find out they're not conscious. You just won't, because you'll never question it. If value is totally subjective and the accuracy of beliefs doesn't matter (as would seem to be the case on experientialist accounts), then this seems to be fine.
Do you think simulations are so great that it’s better for people to be put into them against their wishes, as long as they perceive/judge it as more meaningful or fulfilling, even if they wouldn’t find it meaningful/fulfilling with accurate beliefs? Again, we can make it so that they don’t find out.
Similarly, would involuntary wireheading or drugging to make people find things more meaningful or fulfilling be good for those people?
Or, what about something like a "meaning shockwave," analogous to a hedonium shockwave: quickly killing and replacing everyone with conscious systems that take no outside input and have no sensations (or only the bare minimum) beyond what's needed to generate feelings or judgements of meaning, fulfillment, or love? (Some person-affecting views could avoid this while still matching the rest of your views.)
Of course, I think there are good practical reasons not to do things to people against their wishes, even when it's apparently in their own best interests, but those don't capture my objections. I just think it would be wrong, except possibly in limited cases, e.g., to prevent foreseeable regret. The point is that people really do often want their beliefs to be accurate, and what they value is, by their own statements, really intended to be pointed at something out there, not just the contents of their experiences. Experientialism seems like an example of Goodhart's law to me, like hedonism might (?) seem like an example of Goodhart's law to you.
I don’t think people and their values are in general replaceable, and if they don’t want to be manipulated, it’s worse for them (in one way) to be manipulated. And that should only be compensated for in limited cases. As far as I know, the only way to fundamentally and robustly capture that is to care about things other than just the contents of experiences and to take a kind of preference/value-affecting view.
Still, I don't think it's necessarily bad or worse for someone to not care about anything but the contents of their experiences. And if the state of the universe were already hedonium or just experiences of meaning, that wouldn't be worse. The problem arises because people do specifically care about things beyond the contents of their experiences. If they didn't, and also didn't care about being manipulated, then it seems like it wouldn't necessarily be bad to manipulate them.