Then a mother jumped in front of a car to save her child.
I think that this is a prototypical example in two ways:
1) Descriptive ethics. Describing what people think is right/good/moral. (Actually, I don’t think that this is strictly true, but whatever.)
2) Describing how people actually act (cultural anthropology?).
Your main point in this article seems to be related to 2): “People don’t only seek pleasure.”
a) Was that your main point?
b) Do regular people debate this (I’m pretty sure they do, but I’m not positive)? Philosophers? Rationalists? My impression is that rationalists don’t debate this, and so I’m not sure who this post is targeting (you did say it’s a repost from your blog, so maybe there is indeed a different target audience?).
c) Does this have any implications for what you “should” do? My working conclusion is that “should requires an axiom”: terminal values are arbitrary, and you can only say that you “should” do something to the extent that it leads to a chosen terminal value (or blend of terminal values). (A toy sketch of this is just below.)
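To make “should requires an axiom” concrete, here’s a minimal sketch. This is entirely my own toy illustration; the actions, value functions, and payoff numbers are all made up. The idea is that once you pick a terminal value as your axiom, “should” reduces to “which action scores highest on that value”, and swapping axioms swaps the verdict.

```python
# Toy model: "should" is only defined relative to an axiomatically chosen
# terminal value. All actions, values, and numbers here are hypothetical.

def should(action_outcomes, terminal_value):
    """Return the action that scores highest on the chosen terminal value.

    Without terminal_value (the axiom), there is nothing to maximize and
    "should" is undefined; with it, "should" is just maximization.
    """
    return max(action_outcomes, key=lambda a: terminal_value(action_outcomes[a]))

# Hypothetical outcomes of two actions, scored on two dimensions.
outcomes = {
    "jump_to_save_child": {"own_pleasure": 2, "others_welfare": 10},
    "stay_on_curb":       {"own_pleasure": 5, "others_welfare": 0},
}

# Three candidate terminal values (axioms).
hedonist = lambda o: o["own_pleasure"]
altruist = lambda o: o["others_welfare"]
blend    = lambda o: 0.5 * o["own_pleasure"] + 0.5 * o["others_welfare"]

print(should(outcomes, hedonist))  # stay_on_curb
print(should(outcomes, altruist))  # jump_to_save_child
print(should(outcomes, blend))     # jump_to_save_child (6.0 vs 2.5)
```

The point of the toy: each of the three verdicts is correct only relative to its axiom, and nothing inside the model can rank the axioms themselves.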
(If this post is only about 2), then the following is tangential, and perhaps isn’t the right place for this. But anyway...)
I really don’t find “terminal values are arbitrary” to be a comfortable conclusion. I’m not exactly sure why I find it to be so uncomfortable.
Like most people, I seek “purpose”: some sort of absolute assurance that the goal I’m pursuing is “the right goal”. In other words, I have a desire to find and pursue the “right goal”, even though I think/understand that terminal values are arbitrary.
Intellectually/logically, I think I have a good understanding of the ideas of consequentialism. I think I understand the reasoning behind the idea that terminal values are arbitrary. But maybe there are holes in my understanding that are causing the discomfort. Or better yet, maybe my conclusion that terminal values are arbitrary is wrong.
I’m not sure what my terminal values are, but there’s a very large part of me that only cares about my own pleasure (fortunately, acting altruistically brings me a good amount of pleasure). The implications of that are pretty scary. For example, someone who legitimately only cares about his own pleasure would choose to kill everyone in the world if it meant that he’d survive and be happy. Logically, I don’t see a problem here.
Consider the question “What will lead to your terminal goal?” In the hypothetical, this is already answered for us.
Consider the question “Well, is that a good terminal goal?” Logically, it seems to me that there’s no such thing as a “good terminal goal”.
But emotionally, I feel like there’s a huge problem here. Unfortunately, when I examine this feeling, I can’t find a good reason behind it. If you’re trying to achieve your terminal goals, then the good reason for an emotion is that it helps you achieve those goals; if an emotion isn’t helping you achieve your goals, I’d say there isn’t a good reason behind it. These emotions don’t seem to be helping to achieve the terminal goal of personal happiness; they seem to be the product of an imperfect brain.
I suppose you could argue that those sorts of emotions help you function in society and be happy: that feelings of guilt decrease the likelihood of being shunned and increase the likelihood of being accepted, both of which make it more likely that you survive and live happily. But that doesn’t seem to be the case with my feeling guilty for (possibly/hypothetically) being selfish; if I didn’t feel this guilt, I highly doubt anyone would know.
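Here’s a minimal sketch of the instrumental test I’m applying to guilt in the last two paragraphs. Again, this is my own toy illustration and the payoff numbers are invented: on the “emotions are justified only insofar as they serve your terminal goals” view, guilt that deters visible, shunnable defection passes the test, while guilt over a purely private choice that no one would ever observe fails it.

```python
# Toy instrumental test: on this view, an emotion is "justified" only if
# having it raises expected achievement of the terminal goal (here,
# personal happiness). All payoff numbers are invented for illustration.

def guilt_is_instrumental(happiness_with_guilt, happiness_without_guilt):
    """Keep the emotion only if it serves the terminal goal."""
    return happiness_with_guilt > happiness_without_guilt

# Public case: guilt deters visible defection, so you avoid being shunned.
print(guilt_is_instrumental(happiness_with_guilt=8, happiness_without_guilt=5))  # True

# Private case: no one would ever know either way, so the guilt buys
# nothing socially and costs some happiness directly.
print(guilt_is_instrumental(happiness_with_guilt=4, happiness_without_guilt=6))  # False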
I suppose that you could subsequently argue that our brains aren’t designed for this: that we don’t get to be altruistic in the vast majority of circumstances (where other people’s utility is linked with our own short- and long-run utility) while simultaneously feeling no guilt for choosing our own life over everyone else’s. “But why not?! Doing so seems to be the strategy that would maximize your own utility.”