Ah. The issue is that I think an ideal agent will have to behave meta-ethically, or not at all. The concept of an agent implies the presence of multiple agents; in a single-agent universe the distinction between morality and aesthetics collapses. A universe containing a single human or a single ideal agent is morally equivalent to a universe containing only Clippy.
At least, this is a personal intuition I'm mediocre at unpacking. Certainly, from a naturalistic/evo-psych perspective, our notions of morality derive quite directly from our social functioning and our aesthetics. Icky things are impure-evil because we have some intuition that they will poison or damage us somehow; murder is evil because it hurts other human beings with whom we have social interactions. (In fact, killing a dehumanized, socially decontextualized human is usually considered wrong only by people who've significantly extrapolated and rationalized their ethics away from their moral intuitions!)
I think an ideal agent will have to behave meta-ethically, or not at all. [...] A universe with a single human or single ideal agent in it is morally equivalent to a universe with only Clippy in it.
I think your comment needs more unpacking than that. I don’t understand most of it, especially the sentences above.
in a single-agent universe the distinction between morality and aesthetics collapses.
What would prevent it from collapsing in a multiple-agent universe?
Certainly, from a naturalistic/evo-psych perspective, our notions of morality derive quite directly from...
Are you implying this is at odds with consequentialism, or something else? Consequentialism is compatible with ethical egoism, altruism, utilitarianism, and many other moral philosophies; you could say those are subcategories of consequentialism. You have to have some terminal values for it to make any sense, but that doesn't imply virtue ethics, which is a mistake you seem to be making in the OP.
You have to have some terminal values for it to make any sense, but that doesn’t imply virtue ethics, which is a mistake you seem to be making in the OP.
I’m not even trying to imply anything about virtue ethics in the OP :-/.
What would prevent it from collapsing in a multiple-agent universe?
I think that our concept of morality as distinct from aesthetics seems to be primarily a social thing. Morality is about how we handle other people, or at least some abstracted sense of an Other with real agency. People, certain animals, Nature, and God are thus all considered valid subjects for human morality to deal with, but we usually have no moral intuitions or even deductions about, say, paper-clips or boot laces as such.
A religious person might care that God legislates a proper order for tying your boot laces (it's left shoe followed by right shoe, halachically speaking ;-)), but even they don't normally have a preexisting, terminal moral value over the boot laces themselves.
So, to sum up the unpacking: I think that on a psychological level, morality is fundamentally concerned with other people/agents and their treatment; it's a social function.
From the OP:

In such situations, the only drawback is that naive consequentialism fails to consider consequences on the person acting (i.e., me). Once I make that more virtue-ethical adjustment …
Consequentialism would cover you just fine, if you happened to have any terminal values concerning yourself. Or do you mean consequentialism implies too much computation for you? If so, using simpler moral heuristics is still consequentialism, if you predict they are useful for maximizing your values in certain situations.
I think that our concept of morality as distinct from aesthetics seems to be primarily a social thing. Morality is about how we handle other people
Or animals, just like you said. It could also include how you handle your future or past self, and I don't think that is about aesthetics. Alas, we seem to be arguing about definitions here, which is probably not very useful.
I thought the question was about normative, not descriptive ethics. Normative here meaning: how would an ideal agent behave?
Human beings are too messy to ask about their descriptive ethics in a simple survey question.
It could also include how you handle your future or past self, and I don't think that is about aesthetics.

Agents, is the thing.