Usually I don’t talk about “free will” at all, of course! That would be asking for trouble—no, begging for trouble—since the other person doesn’t know about my redefinition.
Boy, have we ever seen that illustrated in the comments on your last two posts; just replace “know” with “care”. I think people have been reading their own interpretations into yours, which is a shame: your explanation as the experience of a decision algorithm is more coherent and illuminating than my previous articulation of the feeling of free will (i.e. lack of feeling of external constraint). Thanks for the new interpretation.
Hopefully Anonymous:
If I understand you correctly in calling the feeling of deliberation an epiphenomenon, do you agree that those who report deliberating on a straightforward problem (say, a chess problem) tend to make better decisions than those who report not deliberating on it? If so, then it seems that some actual decision algorithm is operating, analogous to the one the person claims to experience.
Do you then think that moral deliberation is characteristically different from strategic deliberation? If so, then I partially agree, and I think this might be the crux of your objection: in moral decisions, we often hide our real objectives from our conscious selves and look for ways to justify those hidden motives. While in chess there's very little sense of "looking for a reason to move the rook" as a high priority, that sort of motivated cognition is pretty ubiquitous in human moral decision-making.
However, what I think Eliezer might reply to this is that there still is a process of deliberation going on; the ultimate decision does tend to achieve our goals far better than a random decision, and that’s best explained by the running of some decision algorithm. The fact that the goals we pursue aren’t always the ones we state— even to ourselves— doesn’t prevent this from being a real deliberation; it just means that our experience of the deliberation is false to the reality of it.