I am a finalist in undergraduate philosophy at the University of Cambridge, UK. Find my LinkedIn here, and my blog here.
Jacob1
Using Older AI Models as a Form of Boycott
Thank you for your comment.
I think this ‘heap paradox’ quite often comes up in philosophy (an example coming to my mind is the distinction between the foetus that is entitled to moral consideration and the one that is not: there seems to be a point, or points, in foetal development after which the foetus gains ‘moral status’ or something like this). It is true that, where this problem can be avoided, the theory/explanation that avoids it is more desirable for that reason, so this is a good point to make.
In Theron Pummer’s recent book ‘The Rules of Rescue,’ he gives a few thought experiments to motivate the intuition that there is a point at which costs to oneself do not suffice to outweigh the moral significance of the plight/likely death of someone else, and yet a point after which they do suffice.
You are probably right that Singer would bite the bullet and say that Unlucky Lisa is not permitted to go to the theatre (even once). This is another thing, then, that I think Singer gets wrong (as well as—as stated in the essay—that PPBO is true/necessary for the argument of FAM).
Despite disagreeing with Singer on these important points, I still see myself as defending him and his project. After all, Singer didn’t simply say ‘Utilitarianism is true; therefore, we ought to be doing more than we are to help those suffering and dying from a lack of food, shelter, and medical care.’ Doing applied/practical ethics isn’t (or at least shouldn’t be, in my opinion) like this. The best arguments in practical ethics will try as much as they can to rely only on premises almost anyone would accept, or at least premises which people with different background convictions and beliefs regarding normative ethical theory could accept.
Yes, these are the self-regarding reasons I imagined you had in mind. My point stands, however, that the behaviour is at least seemingly other-regarding, and it is still action to which the term ‘moral’ appropriately applies. The kinds of things you are surmising about here belong to the realm of meta-ethics and moral psychology, not normative and applied ethics. It might well be that I am only motivated by self-interest to act seemingly morally in accordance with consistency (crudely, that ‘egoism’ is true), but this says nothing as to what this moral system or what consistency requires.
It does not seem to me that the reasons to save the drowning child could be ‘personal’ or self-regarding, and even if they could, they would be such that from them follow other imperatives that are at least seemingly other-regarding and on which the term ‘moral’ would be, I think, appropriate.
As for the scaling objection, it is a good one, and one that has appeared in the comment section of my link-post on the EA forum. I will say here what I did there: it seems very counter-intuitive to me to suppose there are no ‘rights’ and ‘wrongs’ and only things that are ‘better’ and ‘worse’, and even if this is true, it would sometimes be useful to suppose the former exist and to distinguish between actions that fall in the former category and those that fall in the latter.
I guess you could interpret Timmerman as consistent with Singer, but I personally think that he is trying to provide justification for behaviour that is entirely self-regarding, to the extent that it is superfluous to what is required for maximal output.
Yes
I haven’t yet.
Can you?
Thanks for the comment! I’m hoping to get some more feedback on this over time, as there are some more technical questions in my mind as to how to actually pull this off, as well as the theoretical questions relating to whether this would be a good strategy, or whether it would be counter-productive! :)