Consequentialist elucidation of reasons for approving or disapproving of a given policy (virtue) is an effective persuasion technique if your values are actually right (for the people you try to confer them on), and it doesn’t engage the same parts of your brain that make moralizing undesirable.
This does not match my observations.
More generally I find that I do not trust other people’s explicit reasoning more than I trust their other forms of intelligence. For example I would never use this description:
What happens here is transfer of responsibility for important tasks from the imperfect machinery that historically used to manage them (with systematic problems in any given context that humans but not evolution can notice), to explicit reasoning.
We aren’t moving away from imperfect machinery here. We’re just moving to a different part of it—and a part that some suggest exists primarily for the purpose of constructing bullshit.
There is a potential for improving our moral judgement via explicit reasoning but that improvement isn’t something I would expect from most people who make the shift—where by ‘expect’ I mean based on how I have observed intelligent people behave when engaging in explicit moral reasoning. It takes a lot of training before you can even catch up with your ‘default’ (some of which Vladimir alluded to).
There is a potential for improving our moral judgement via explicit reasoning but that improvement isn’t something I would expect from most people who make the shift
Hence the importance of making sure your new mode of reasoning is trustworthy before shifting the load to it, and continuing to pay attention to what the older modes of reasoning tell you even if you no longer obey them blindly. And the difficulty of doing this on your own calls for institutional tools, such as textbooks, training programs, or community groups.
In my previous comment, I was concerned with contrasting the function of moralization, as is stressed here, with the mechanism of moralization, which is ingrained so deeply that, for example, children who receive too little praise develop dysfunctionally.
More generally I find that I do not trust other people’s explicit reasoning more than I trust their other forms of intelligence.
The big problem is that explicit reasoning is at least as often used for rationalizing pre-existing beliefs as for developing correct (or at least, more correct) beliefs. This is also something to watch for carefully in your own thinking. Beware when your explicit reasoning tells you something you want to hear.