Thanks for the tips! Adding a brief primer on virtue ethics and consequentialism is a good idea, and I think you’re right that this whole idea is more relevant to the social sciences than philosophy. Did you actually want answers to those questions, or were they just to help show me the kind of questions that belong in each category? Great distinction at any rate, I’ll go change that word “philosophical” to “intellectual” now.
I think you noticed, or at least, you’ve now led me to notice, that I’m not really interested in the “in theory” at all, or in struggling over definitions. I’m just trying to show what is actually happening “in practice” and suggest that whether someone calls himself a virtue ethicist or a consequentialist doesn’t change the fact that he is psychologically motivated (for lack of a better term) to pursue happiness and goodness. I think what I’m trying to do with this article is help figure out where we should draw a boundary.
b) terminal goals are completely arbitrary. I.e., you can’t say that killing people is a bad terminal goal. You can only say that “killing people is bad if… you want to promote a sane and happy world.” (Instrumental) rationality is about being good at achieving our ends. But it doesn’t help us pick our ends.
I think this might have been my whole point, that our real ends aren’t as arbitrary as we think. It seems to me that in practice there are really just two ends that humans pursue. Nothing else seems like an end-in-itself. Killing people can be an instrumental goal that someone consciously or subconsciously thinks will make him happy, that will lead him to his optimal mind-state. He might be wrong about this; it might not actually lead him to his optimal mind-state. Or maybe it does. Either way, in the context of this discussion it doesn’t matter whether we classify killing as “wrong” or not; what matters is what we do about it. In the real world, we’re motivated, by our own desires for personal happiness and goodness, to lock up killers.
Very important point: If you’re claiming that doing so is rational, then one of two things must be the case:
But I’m not claiming it’s rational… I’m not claiming anything, and I’m not arguing anything or proving any point. I’m just describing how I observe that people who seem very rational can still maximize their personal happiness inefficiently. The resulting idea is that goodness seems like an end-in-itself, and a relatively universal one, so we should recognize it as such.
The main takeaway I’m getting from your advice is that I should try to make it clear in this article that I’m not attempting to prove a point, but rather just to “carve along the joints” and offer a clearer way of looking at things by lumping happiness and goodness into the same category.
Perhaps one other way we could describe what is actually happening in practice would be to say that virtue ethicists pursue their terminal values more subconsciously while consequentialists pursue the same terminal values more consciously.
Did you actually want answers to those questions, or were they just to help show me the kind of questions that belong in each category?
The latter.
I think you noticed, or at least, you’ve now led me to notice, that I’m not really interested in the “in theory” at all, or in struggling over definitions.
I didn’t know you weren’t interested in it at all, but I knew you were more interested in the practice part. Come to think of it, I suspect that you’re exaggerating in saying that you don’t really care about it at all.
I’m just trying to show what is actually happening “in practice” and suggest that whether someone calls himself a virtue ethicist or a consequentialist doesn’t change the fact that he is psychologically motivated (for lack of a better term) to pursue happiness and goodness.
Well said. In your article, I think that some of the language implies otherwise, but I don’t like talking about semantics either and I think the important point is that this is clear now.
The other important point is that I’ve screwed up and need to be better. I have an instinct to interpret things literally. I also try hard to look for what people probably meant given more contextual-type clues, but I’ve partially failed in this instance, and I think that all the information was there for me to succeed.
I think this might have been my whole point, that our real ends aren’t as arbitrary as we think. It seems to me that in practice there are really just two ends that humans pursue.
I think that we agree, but let me just make sure: ends are arbitrary in the sense that you could pick whatever ends you want, and you can’t say that they’re inherently good/bad. But they aren’t arbitrary in the sense that what actually drives us isn’t arbitrary at all. Agree?
But I’m not claiming it’s rational… I’m not claiming anything, and I’m not arguing anything or proving any point. I’m just describing how I observe that people who seem very rational can still maximize their personal happiness inefficiently. The resulting idea is that goodness seems like an end-in-itself, and a relatively universal one, so we should recognize it as such.
Let me try to rephrase this to see if I understood and agree: “People who seem very rational seem to act in ways that don’t maximize their personal happiness. One possibility is that they are trying to optimize for personal happiness but failing. I think it’s more likely that they are optimizing for goodness in addition to happiness. Furthermore, this seems to be true for a lot of people.”
I didn’t know you weren’t interested in it at all, but I knew you were more interested in the practice part. Come to think of it, I suspect that you’re exaggerating in saying that you don’t really care about it at all.
Hah, and I thought I was literal. I guess I’m interested in knowing the “in theory” just so I can make connections (like adherents to different moral systems have different tendencies in terms of making decisions consciously vs. subconsciously) to the “in practice.”
The other important point is that I’ve screwed up and need to be better. I have an instinct to interpret things literally. I also try hard to look for what people probably meant given more contextual-type clues, but I’ve partially failed in this instance, and I think that all the information was there for me to succeed.
But at the same time, you’ve really helped me figure out my point, which wouldn’t have happened if you said “nice article, I get what you’re saying here.” In regular life conversations, it’s better to just think about what someone meant and reply to that, but for an article like this, it was totally worthwhile for you to reply to what I actually said and share what you thought it implied.
I think that we agree, but let me just make sure: ends are arbitrary in the sense that you could pick whatever ends you want, [and you can’t say that they’re inherently good/bad.] But they aren’t arbitrary in the sense that what actually drives us isn’t arbitrary at all. Agree?
The bracketed part I don’t care about. Discussing “inherently good/bad” seems like a philosophical debate that hinges on our ideas of “inherent.” The rest, I agree :) We seem to choose which actions to take arbitrarily, and through those actions we seemingly arbitrarily position ourselves somewhere on the happiness-goodness continuum.
Let me try to rephrase this to see if I understood and agree: “People who seem very rational seem to act in ways that don’t maximize their personal happiness. One possibility is that they are trying to optimize for personal happiness but failing. I think it’s more likely that they are optimizing for goodness in addition to happiness. Furthermore, this seems to be true for a lot of people.”
Great wording! May I plagiarize?