Eliezer’s standard use of ‘logical’ takes the ‘abstract’ part of logicalish vibes and runs with it; he adopts the convention that sufficiently careful purely abstract reasoning (i.e., reasoning that is not about any particular spatiotemporal thing or pattern) is ‘logical,’ whereas reasoning about concrete things-in-the-world is ‘physical.’
I like this splitup! I think I want to make a slightly stronger claim than this; i.e., that by logical discourse we’re thinning down a universe of possible models using axioms.
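(To make ‘thinning down a universe of possible models using axioms’ concrete, here is a minimal toy sketch in Python; the three-element domain, the two axioms, and all the names are invented purely for illustration.)

```python
from itertools import product

# Toy illustration of narrowing down models with axioms: a "model" here is a
# binary operation table on the set {0, 1, 2}; the axioms are associativity
# and a left identity at 0. Both the domain and the axioms are invented.
ELEMENTS = range(3)

def all_models():
    """Yield every binary operation table on ELEMENTS (3**9 = 19683 candidates)."""
    for values in product(ELEMENTS, repeat=len(ELEMENTS) ** 2):
        yield {(a, b): values[a * len(ELEMENTS) + b]
               for a in ELEMENTS for b in ELEMENTS}

def satisfies_axioms(op):
    """Check the two toy axioms against one candidate model."""
    associative = all(op[op[a, b], c] == op[a, op[b, c]]
                      for a in ELEMENTS for b in ELEMENTS for c in ELEMENTS)
    left_identity = all(op[0, a] == a for a in ELEMENTS)
    return associative and left_identity

surviving = [op for op in all_models() if satisfies_axioms(op)]
print(f"{len(surviving)} of 19683 candidate models satisfy the axioms")
```

Adding more axioms (say, commutativity or inverses) thins the surviving set further; with a categorical set of axioms the survivors are all isomorphic to one another, which is the ‘narrowing down mathematical models’ sense of ‘logical’ rather than the ‘navigating to a part of the physical universe’ sense.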
One thing I didn’t go into in this epistemology sequence is the notion of ‘effectiveness’ or ‘formality’. It’s important, but my take on it feels much more standard—I’m not sure I have anything more to say about what constitutes an ‘effective’ formula or axiom or computation or physical description than other workers in the field. This notion carries a lot of the load in reductionism in practice; e.g., the problem with irreducible fear is that you have to appeal to your own brain’s native fear mechanisms to carry out predictions about it, and you can never write down what it looks like. But after we’re done being effective, there’s still the question of whether we’re navigating to a part of the physical universe or narrowing down mathematical models, and by ‘logical’ I mean to refer to the latter sort of thing rather than the former. The load of talking about sufficiently careful reasoning is mostly carried by ‘effective’, as distinguished from empathy-based predictions, appeals to implicit knowledge, and so on.
I also don’t claim to have given morality an effective description—my actual moral arguments generally consist in appealing to implicit and hopefully shared reasons-for-action, not derivations from axioms—but the metaphysical and normative claim is that these reasons-for-action both have an effective description (descriptively speaking) and that any idealized or normative version of them would still have an effective description (normatively speaking).
Let me try a different tack in my questioning, as I suspect maybe your claim is along a different axis than the one I described in the sibling comment. So far you’ve introduced a bunch of “moving parts” for your metaethical theory:
moral arguments
implicit reasons-for-action
effective descriptions of reasons-for-action
utility function
But I don’t understand how these are supposed to fit together, in an algorithmic sense. In decision theory we also have missing modules or black boxes, but at least we specify their types and how they interact with the other components, so we can have some confidence that everything might work once we fill in the blanks. Here, what are the types of each of your proposed metaethical objects? What’s the “controlling algorithm” that takes moral arguments and implicit reasons-for-action, and produces effective descriptions of reasons-for-action, and eventually the final utility function?
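(For concreteness, the kind of type-level specification being asked for might look like the sketch below; every type, field, and function name is hypothetical, and the controlling algorithm is deliberately left as an explicit black box.)

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical type signatures only: every name below is invented to show what
# a type-level specification of the "moving parts" could look like.

@dataclass
class MoralArgument:
    text: str                      # an argument appealing to (hopefully shared) reasons-for-action

@dataclass
class ReasonForAction:
    gloss: str                     # implicit; not yet effectively described

@dataclass
class EffectiveDescription:
    weigh: Callable[[str], float]  # an explicit, computable criterion over outcome descriptions

UtilityFunction = Callable[[str], float]   # maps an outcome description to a value

def controlling_algorithm(
    arguments: List[MoralArgument],
    implicit_reasons: List[ReasonForAction],
) -> UtilityFunction:
    """The black box in question: how moral arguments act on implicit
    reasons-for-action to yield effective descriptions, and how those
    aggregate into a final utility function."""
    raise NotImplementedError
```

Even with the body left unfilled, a signature like this pins down what feeds into what, which is the sense in which decision theory’s black boxes are at least typed.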
As you argued in Unnatural Categories (which I’ve been citing a lot recently), reasons-for-action can’t be reduced the same way as natural categories. But it seems completely opaque to me how they are supposed to be reduced, beyond the fact that moral arguments are involved.
Am I asking for too much? Perhaps you are just saying that these must be the relevant parts, and that we should figure out both how they are supposed to work internally and how they are supposed to fit together?
(From the great-grandparent.)
my actual moral arguments generally consist in appealing to implicit and hopefully shared reasons-for-action, not derivations from axioms
So would it be fair to say that your actual moral arguments do not consist of sufficiently careful reasoning?
these reasons-for-action both have an effective description (descriptively speaking)
Is there a difference between this claim and the claim that our actual cognition about morality can be described as an algorithm? Or are you saying that these reasons-for-action constitute (currently unknown) axioms which together form a consistent logical system?
Can you see why I might be confused? The former interpretation is too weak to distinguish morality from anything else, while the latter seems too strong given our current state of knowledge. But what else might you be saying?
any idealized or normative version of them would still have an effective description (normatively speaking).
Similar question here. Are you saying anything more than that any idealized or normative way of thinking about morality is still an algorithm?