I upvoted this post, and I want to qualify that upvote. I upvoted it because I believe it raises a substantial point, but I feel it doesn’t have enough, for lack of a better term, punch to it. Part of my lack of conviction comes from not being well-educated in matters of moral psychology, or philosophy either, and I suspect this would be cleared up if I were to study them more. Shminux, you might not recognize me, but I’m Evan from the meetup. Anyway, I remember that at the last meetup we both attended, a couple of weeks ago, when we discussed the Politics is the Mind-Killer sub-sequence, you mentioned how most people who call themselves “consequentialists”, like Andrew, are actually just “aspiring consequentialists”: while they may be signaling allegiance to an ethical philosophy they find more appealing than the default human norms, they’re still constrained by the quick heuristics imposed on us by our genetic history, which better conform to a less formal ethical philosophy, like virtue ethics. I believe that was a decent point, whether or not that same train of thought inspired this post.
For the record, I too can find a grain of virtue ethics in my own moral decisions.
My naive hypothesis would be that people who identify as consequentialists, and who have read and internalized a consequentialist mindset, would be more able to think along consequentialist lines than someone less versed in the philosophical literature. I wouldn’t be surprised if this hypothesis were easily falsified by experimental philosophy, though. I wouldn’t be surprised either if humans who self-identify as consequentialists also go with their intuitions about what actions will increase, or decrease, whatever they call ‘utility’, rather than going through an explicit and rigorous cost-benefit calculation. In this sense, I don’t perceive people who self-identify as utilitarian, or consequentialist, as more significant, or worthy of our attention, when their actions closely fit the mold of how a layperson untrained in ethical philosophy would act.
First, I do not think there is anything wrong with virtue ethics, as long as we recognize that it is one of several robust computational shortcuts, and not the one true normative ethics. It is quite rational to use all the tools at your disposal. It is irrational for a human to proclaim themselves a consequentialist, because no one is. A form of consequentialism is essential for FAI, since virtue- or rule-based shortcuts are bound to fail on edge cases, and an AI is very good at finding edge cases. Humans, on the other hand, extremely rarely run into these edge cases, such as the trolley problem or specks vs. torture. More common are paradigm shifts, such as universal suffrage, gay rights, abortion, euthanasia, ethical treatment of animals, where some deontological rules have to be carefully recalculated, then followed again. Some day it might be sims, uploads, cloning, designer babies, and so on.
I wouldn’t be surprised either if humans who self-identify as consequentialists also go with their intuitions about what actions will increase, or decrease, whatever they call ‘utility’, rather than going through an explicit and rigorous cost-benefit calculation.
I would estimate this to be much likelier than them being honest-to-goodness consequentialists.
In this sense, I don’t perceive people who self-identify as utilitarian, or consequentialist, as more significant, or worthy of our attention
If someone says “I don’t just follow my intuition but also attempt to calculate utilities the best I can before making a decision”, then that is worthy of respect. If someone says “I base my actions solely on their evaluated consequences”, I would lower my opinion of them because of this self-delusion.
Thanks for replying! You made more points that dovetail with my own observations. I’d qualify (again) my previous comment: it was not an endorsement of virtue ethics generally, but an acknowledgement that it can be valuable. I might consider a form of consequentialism to be better than any other system we have right now for an ideal rational agent, but I don’t believe that humans in their current state will reach the best results they could achieve by pretending to be consequentialists. I don’t know how humans will fare in their ethical behavior in a future where our mind-brains are modified.
More common are paradigm shifts, such as universal suffrage, gay rights, abortion, euthanasia, ethical treatment of animals, where some deontological rules have to be carefully recalculated, then followed again.
I don’t believe anything resembling careful recalculation occurred with any of these shifts.