Daniel, the foundational problem with meta-ethics (as done by philosophers) is that they start from the presumption that morality is something “out there”.
For non-consequentialists this seems to usually result in them relying on some combination of intuition (not as much a fault in ethics as in other subjects, but we should try to do better) and axiomatic systems. When intuition collides with an axiomatic system, or different axiomatic systems contradict one another, they have no way to resolve the issue.
A moral prescription can be judged by how well it satisfies some goal. The goal is ultimately “arbitrary”—it is up to any person making a judgement about a prescriptive system. Separating out prescriptions from goals is perhaps not logically necessary, but I think it is useful to distinguish between moral disagreements that can be eliminated through gaining and spreading knowledge (any disagreement assuming common goals) and those that can’t (goal disagreements).
Even when philosophers correctly recognise that a goal is necessary to judge prescriptions, they tend to treat some way of deriving a goal (typically some form of utilitarianism) as objectively right. This leads to a tendency to deny evidence that their own personal judgements of prescriptive systems (and those of others) in fact derive from different goals. It seems to me, moreover, that most consequentialists haven’t properly distinguished between prescriptions and the goals by which to judge prescriptions, which leads to further confusion (rule consequentialism is a clumsy attempt to get around this, but as commonly understood it is not very general, since a moral prescription need not be a set of simple general rules).