Quoth AllAmericanBreakfast:

Do our moral intuitions stem from a consequentialist goal to save all lives that can be saved? Or do they stem from an obligation to maintain a healthy, caring, and more-or-less self-sufficient society?
If the question is just "What's the ultimate psychological cause of my moral intuitions in these cases?", then 🤷.

If the question is "Are we just faking caring about saving other lives, when really we don't care about other human beings' welfare, autonomy, or survival at all?", then I feel confident saying "Nah".

I get a sense from this question (and from Romeo's content) of "correctly noticing that EA has made some serious missteps here, but then swinging the pendulum too far in the other direction". Or maybe it just feels to me like this is giving surprisingly wrong/incomplete pictures of most people's motivations.
Quoth Romeo:
Of course distance in time, space, or inference creates uncertainty. Of course uncertainty reduces expected value, and possibly even brings the sign of the action into question if the expected variance is high enough.
For most people I suspect the demandingness is the crux, rather than the uncertainty. I think they'd resist the argument even if the local "save a drowning child" intervention seemed more uncertain than the GiveWell-ish intervention. (Partly because of a "don't let yourself get mugged" instinct, partly because of the integrity/parts thing, and partly because of scope insensitivity.)

I also think there's a big factor of "I just don't care as much about people far away from me; their inner lives feel less salient to me" and/or "I won't be held as blameworthy if I ignore large amounts of distant suffering as I would be if I ignored even small amounts of nearby suffering, because the people who could socially punish me are also located near me".
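Romeo's point about uncertainty possibly flipping the sign of an action can be made concrete with a toy Monte Carlo sketch. This is purely illustrative and not from the thread: the numbers are made up, and the normal-distribution model of "effect of an intervention" is an assumption. It shows that even with the same positive point estimate of benefit, a wider spread means a larger chance the true effect is net harmful.

```python
import random

def prob_negative(mean, sd, n=100_000, seed=0):
    """Monte Carlo estimate of P(effect < 0) for a normally
    distributed effect with the given mean and standard deviation."""
    rng = random.Random(seed)
    return sum(rng.gauss(mean, sd) < 0 for _ in range(n)) / n

# Same expected benefit (10 arbitrary units), increasing uncertainty:
for sd in (1, 5, 20):
    p = prob_negative(mean=10, sd=sd)
    print(f"mean=10, sd={sd}: P(net harm) ~ {p:.3f}")
```

With a tight estimate (sd=1) the chance of net harm is negligible; at sd=20 roughly a third of the probability mass sits below zero, even though the expected value is unchanged.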
We can consider a 2x2 matrix:
|             | Near           | Far                        |
|-------------|----------------|----------------------------|
| Undemanding | Drowning Child | Drowning Child Phone Call? |
| Demanding   | In a War Zone? | Against Malaria Foundation |
Undemanding + Near: Drowning child. There's a cost to saving the child, but because this scenario is rare, one-off, local, and not too costly, almost everyone (pace Keltham) is happy to endorse saving the child here.
Undemanding + Far: The same dilemma, except you're (say) missing a medium-importance business call (with cost equivalent to one fancy suit) in order to give someone directions over the phone that will enable them to save a drowning child in a foreign country.
I suspect most people would endorse doing the same thing in these two cases, at least given a side-by-side comparison.
Demanding + Near: E.g., fighting in the trenches in a just war; or you're living in WW2 Germany and have a unique opportunity to help thousands of Jews escape the country, at the risk of being caught and executed.
Demanding + Far: AMF, GiveDirectly, etc.
Here, my guess is that a lot more people will see overriding moral urgency and value in "Demanding + Near" than in "Demanding + Far". When bodies are flying all around you, or you are directly living through your own community experiencing an atrocity, I expect that to be parsed as a very different moral category than "there's an atrocity happening in a distant country and I could donate all my time and money to reducing the death toll".
I also think of the demandingness as generating an additional uncertainty term, in the Straussian sense.
Could you clarify what you mean by "demandingness"? According to my understanding, the drowning child should be more demanding than donating to AMF, because the situation demands that you sacrifice to rescue them, whereas AMF does not place any specific demands on you personally. So I assume you mean something else?
The point of the original drowning child argument was to argue for "give basically everything you have to help people in dire need in the developing world". That is, the argument was meant to move from
A relatively Undemanding + Near scenario: You encounter a child drowning in the real world. This is relatively undemanding because it's a rare, one-off event that only costs you the shirt on your back plus a few minutes of your time. You aren't risking your life, giving away all your wealth, spending your whole life working on the problem, etc.
to
A relatively Demanding + Far scenario. It doesn't have to be AMF or GiveDirectly, but I use those as examples. (Also, obviously, you can give to those orgs without endorsing "give everything you have". They're just stand-ins here.)
Equally importantly, IMO, it argues for a transfer from a context where the effect of your actions is directly perceptually obvious to one where it is unclear and filters through political structures (e.g., aid organizations and what they choose to do and to communicate; any governments they might be interacting with; any other players on the ground in the distant country) that will be hard to model accurately.
My guess is that this has a relatively small effect on most people's moral intuitions (though maybe it should have a larger effect; I don't think I grok the implicit concern here). I'd be curious if there's research bearing on this, and on the other speculations I tossed out there. (Or maybe Spencer or someone can go test it.)
I see. So essentially demandingness is not about how strong the demand is but about how much is being demanded?