Why do people seem to mean different things by “I want the pie” and “It is right that I should get the pie”? Why are the two propositions argued in different ways?
This seems to be because “I want the pie” is too direct, too obvious; it violates a social convention of anti-selfishness which demands a more altruistic morality. “It is right that I should get the pie” implies some quasi-moral argument that covers the naked selfishness of the proposal in more acceptable, altruistic terms. The truth of the matter is that either argument is a selfish one, intended merely to take possession of the pie… and the second, by virtue of being sneakier, works better than actual honesty. An honest declaration of selfish intent can be rejected on apparently altruistic grounds, which society deems more acceptable… while a dishonest argument couched in altruistic terms cannot be rejected on selfish grounds, only counter-argued with a more altruistic refutation. Thus the game of apparent ‘morality’: equally selfish agents pretending that they are not, in fact, as selfish as they are, in order to avoid the social stigma of admitting to selfishness.
The more honest statement “I want the pie” needs no such justification. It is direct and to the point, and it doesn’t need to cloak itself in the moral sophistry a pseudo-altruistic argument requires. Once it is acknowledged that both parties think they should have the pie for no reason other than that they both want it, they can move on to settling the issue… whether by coming to some kind of agreement over the pie, or by competing for it. Either is certainly more straightforward than pretending that there is a ‘right’ behind the desire for the pie.
When and why do people change their terminal values?
A change in terminal values is a matter of an imperfect agent reevaluating their utility function. Although most individuals would find it most personally beneficial, in any given situation, to take the most selfish choice, the result of becoming known for that kind of selfishness is the loss of the tribe’s support… and perhaps even the tribe’s enmity. Hence, overt selfishness has less long-term utility, even by a purely selfish utility function, than the appearance of altruism. Simulating altruism is more difficult than a degree of real altruism, so most utility functions, in actual practice, mix both. Few would admit to such selfish concerns if pressed, but the truth of the matter is that most people would rather have a new television than ensure that a dozen starving children they’ve never seen or heard of are fed for a year. Accordingly, their morality becomes a mix of conflicting values, with the desire to appear ‘moral’ and ‘altruistic’ competing with their own selfish desires and needs. As the individual leans one way or the other, the balance between the two shifts and their terminal values are weighted differently… which is the most common sort of change in terminal values, as those values are fundamental enough not to change lightly. It takes a more severe reevaluation to drop or adopt a terminal value outright; but, being imperfect agents, humans certainly have plenty of opportunities to make such reevaluations.
Do the concepts of “moral error” and “moral progress” have referents?
No. “Morality” is a term for the altruistic cover used to disguise people’s inherent selfishness. Presented with an example of destructive selfishness, one may declare “moral error” in order to proclaim one’s altruism, just as one may declare “moral progress” to applaud an act of apparent altruism. Neither really refers to anything; both are social camouflage, a way of proclaiming one’s allegiance to tribal custom.
Why would anyone want to change what they want?
Because they find themselves dissatisfied with their circumstances, for whatever reason. As imperfect agents, humans can’t fully foresee the end consequences of all of their desires… the best they can usually do, on finding that those desires have led them somewhere they do not want to be, is to recognize that the old desires were responsible for the undesired circumstances and attempt to abandon them in favor of a new set of desires that might prove more useful.
Why and how does anyone ever “do something they know they shouldn’t”, or “want something they know is wrong”?
That’s easy, once the idea of morality as social cover for selfishness is granted. The person in question has selfish desires, but has convinced themselves that their selfishness is wrong and that they should be acting altruistically instead. However, since a path of pure altruism has poor overall fitness, they must act selfishly on occasion, and the more selfish those actions are, short of being overt enough to incur the tribe’s displeasure, the better for the individual. Accordingly, an action selfish enough to seem immoral is exactly the sort of thing a person will have reason to want, and to do, despite the fact that it seems wrong.
Does the notion of morality-as-preference really add up to moral normality?
I do not believe there is such a thing as a normal morality; morality is merely a set of cultural ideas devised to limit the degree to which the selfishness of individuals can harm the tribe. The specifics of morality can and do vary widely with tribal circumstances, and are not on the same level as individual preferences… but this in no way implies that morality is in any sense an absolute, determined by anything outside of ordinary reality. If morality is a preference, it is a group preference, shaped by the factors that are best for the tribe, not for the individuals within it.
Why do people seem to mean different things by “I want the pie” and “It is right that I should get the pie”? Why are the two propositions argued in different ways? This seems to be because “I want the pie” is too direct, too obvious; it violates a social convention of anti-selfishness which demands a more altruistic morality.
People use both moralistic and altruistic claims hypocritically, but altruism and moralism aren’t the same thing. Morality is based on the idea that people deserve one thing or another (whether riches or imprisonment); what they deserve is in some sense part of the objective world: moral judgments are thought to be objective truths.
For the argument that morality is a social error, and that most of us would be better off were it abolished, a good source is Ian Hinckfuss, “The Moral Society,” as well as my “morality series.”
Uh… did you just go through my old comments and upvote a bunch of them? If so, thanks, but… that really wasn’t necessary.
It’s almost embarrassing in the case of the above; like much of the other stuff I wrote a year or more ago, it reads like an extended crazy rant.
I read some of your posts because, having agreed with you on some things, I wondered whether I would agree on others. Actually, I didn’t check the date. When I read a post I want to approve of, I don’t worry whether it’s old.
If I see a post like this one espousing moral anti-realism intelligibly, I’m apt to upvote it. Most of the posters are rather dogmatic preference utilitarians.
Sorry I embarrassed you.
No worries; it’s just that here, in particular, you caught the tail end of my clumsy attempts to integrate my old Objectivist metaethics with what I’d read thus far in the Sequences. I have since reevaluated my philosophical positions… after all, tidy as the explanation may superficially seem, I no longer believe that the human conception of morality can be based entirely on selfishness.