Here’s something I’ve linked to a number of times on LessWrong, but as a newcomer (I think, despite the lack of a green shoot by your name) you will likely not have seen it before. I wrote it, inspired by a lot of stuff I’ve seen. It approaches the topic of the post in a very different manner. I’d be interested in what answer you would make to Insanity Wolf. Beware: here be dragons.
Along these lines, Scott can be quoted:

I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table—the first step in the chain just looks so much obviously better than the starting point—but I’m willing to make that sacrifice.
Scott doesn’t apply this to EA, but if you start from having to save a child, and the end point is “you have to sacrifice a lot to save as many children as you can”, this seems relevant.
I was scrolling for a while, assuming I’d neared the end, only to look at the position of the scrollbar and find I was barely 5% through! This must have taken a fair bit of effort. I really like the helpful page and I’m glad I know about it; I encourage you to make a linkpost for it sometime, if you haven’t already.
A labour of love. Or something. :)

I have another thousand or so of these, which I may just dump on a second page, unsorted, called The Gospel According to Insanity Wolf. That’s not counting the ones that I’ve decided are too extreme to publish at all. All drawn from life.
I was scrolling for a while, assuming I’d neared the end, only to look at the position of the scrollbar and find I was barely 5% through!
And there’s a meme for that too! The last in the Altruism section.
IS THIS NEARLY OVER YET?
LOOK AT THE SCROLL BAR!
YOU’VE HARDLY STARTED!
I’m sure there’s an argument to be made in defence of supererogation. I’ve never seen it though. People say “but demandingness” and Chad Singer replies Yes. My own faith in the boundedness of duty in both magnitude and distance is sufficient to not take even one step onto the slippery path that leads down only to the altruism event horizon that rips souls apart.
The next time you are asked for some money by a homeless person, I encourage you to seriously consider how that money of yours would otherwise be spent, and ask yourself the following question: is it really plausible that this money will go towards something worthwhile enough for me to keep it for myself?
Given that giving the money to the homeless person would be actively bad, both personally and socially, and that I do not otherwise have a habit of spending money on things that harm myself and others, I can confidently say that I can’t think of a single thing I have ever spent money on for which the answer to your question would be “no”.
In light of this, let us modify Singer’s argument:
If you have to save people at a “relatively small cost to yourself”, and you allow many small costs to add up cumulatively to a large cost, you run into a heap (sorites) paradox: you have to pick some point at which the cumulative small costs amount to a large one, even though the costs just before and just after that threshold barely differ. Of course, this also means that you can pick two children as your threshold.
(It also raises the question “if your society already redistributes your money to children, have you passed the threshold just from that?” Your taxes may be enough.)
Thank you for your comment.

I think this ‘heap paradox’ comes up quite often in philosophy (one example that comes to mind is the distinction between a foetus that is entitled to moral consideration and one that is not—there seems to be some point, or points, in foetal development after which the foetus gains ‘moral status’ or something like it). It is true that, where this problem can be avoided, the theory or explanation that avoids it is, other things equal, more desirable, so this is a good point to make.
In Theron Pummer’s recent book ‘The Rules of Rescue’, he gives a few thought experiments to motivate the intuition that there is a point at which costs to oneself do not suffice to outweigh the moral significance of the plight/likely death of someone else, and yet a point after which they do.
I think my main objection is one of disanalogy, but it doesn’t quite fit into those dimensions.
There’s a bait-and-switch from “our intuition is that it’s nearly required to sacrifice a suit to save a child” to “it’s some sort of counterintuitive mandate to radically change our way of life”. Intuition isn’t transitive and analogies are extremely lossy—if intuition justifies A, and A is in some ways similar to B, it doesn’t necessarily justify B.
In truth, there is no legal duty and in modern societies probably no socially-enforced duty to save the child if you don’t want to. I personally probably would, but I’d not shun someone who thought it too risky and costly.
Thank you for your comment.

I don’t know if Singer has himself answered Timmerman (I searched but didn’t find anything), but based on what I have read of Singer, wouldn’t his answer be that yes, Lisa must go on saving drowning children? That she must attend to her bank balance only so far as necessary to maximise the children saved, and ignore such frivolities as the theatre? That’s what it comes down to, for Singer: using one’s resources to do all the good you can do.
You are probably right that Singer would bite the bullet and say that Unlucky Lisa is not permitted to go to the theatre (even once). This is another thing, then, that I think Singer gets wrong (along with, as stated in the essay, holding that PPBO is true/necessary for the argument of FAM).
Despite disagreeing with Singer on these important points, I still see myself as defending him and his project. After all, Singer didn’t simply say ‘Utilitarianism is true; therefore, we ought to be doing more than we are to help those suffering and dying from a lack of food, shelter, and medical care.’ Doing applied/practical ethics isn’t (or at least shouldn’t be, in my opinion) like this. The best arguments in practical ethics will try, as much as they can, to rely only on premises almost anyone would accept, or at least ones that people with different background convictions and beliefs about normative ethical theory could accept.