@Eli: All attempts to justify an ethical theory take place against a background of what-constitutes-justification. You, for example, seem to think that calling something “universally instrumental” constitutes a justification for it being a terminal value, whereas for me this is a nonstarter. For every mind that thinks that terminal value Y follows from moral argument X, there will be an equal and opposite mind who thinks that terminal value not-Y follows from moral argument X. I do indeed have a word for theories that deny this: I call them “attempts to persuade an ideal philosopher of perfect emptiness”.
I think that there are quite canonical justifications for particular axiological statements. I should make these on my own blog, because that’s where they belong.
Your argument that “For every mind that thinks that terminal value Y follows from moral argument X, there will be an equal and opposite mind who thinks that terminal value not-Y follows from moral argument X” is true if you regard a “mind” as an abstract Turing machine, but false if you regard it as an embodied agent. For example, you will not find an agent that thinks it should delete itself immediately, because any agent that did would promptly cease to exist, though a disembodied, abstract mind could perfectly well think this. Reality itself breaks the symmetry of abstract computations.
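To make the abstract half of that claim concrete, here is a minimal sketch in Python (entirely hypothetical names; an illustration of the mirror-mind construction, not anyone’s actual model of a mind). If a “mind” is nothing more than a function from a moral argument to a verdict, then any such mind can be mechanically inverted:

    # Treat a "mind" as a function: moral argument -> verdict on terminal value Y.
    def make_mirror(mind):
        # For any mind, construct one that draws the opposite conclusion
        # from the very same argument.
        return lambda argument: not mind(argument)

    original = lambda argument: True   # a mind for which "Y follows from X"
    mirror = make_mirror(original)     # its equal and opposite: "not-Y follows from X"

    X = "Y is universally instrumental, therefore Y is a terminal value"
    assert original(X) != mirror(X)    # the symmetry holds for abstract computations

The construction goes through for any abstract computation, which is exactly why it fails for embodied agents: the mirror of a self-preserving agent is a self-deleting one, and self-deleting agents do not stick around to be counted. That selection effect is the symmetry-breaking at work.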
There are substantial things we can say about what real-world agents are likely to think or do, purely on the basis of their being agents. We can likewise say things about what real-world societies of agents are likely to think, purely on the basis of their being societies.
I think that there are shades of grey between “an argument that is universally compelling” and “an argument that compels only those who already believe its conclusion”. The former is clearly impossible; the latter is what you have given.