Is this argument about determinism and moral judgment flawed?
1. If determinism is true, then whatever can be done actually is done. (Definition)
2. Whatever should be done, can be done. (Well-known “ought implies can” principle)
3. If determinism is true, then whatever ought to be done actually is done (from 1, 2).
The context is that it appears to me that people reject determinism largely because they’re committed to certain moral positions that are incompatible with determinism. Perhaps I will write a longer post about this.
Hmm, I think the argument isn’t valid:
The “can” in Line 2 refers to logical possibility.
At least, I think that’s true of Kant’s “ought implies can” principle.
The “can” in Line 1 refers to physical possibility.
The argument is sound only if the two “can”s refer to the same modality.
You could replace the “can” in Line 1 with logical possibility, and then the argument would be valid. The view that whatever can logically be done actually is done is called Necessitarianism. It’s pretty fringe.
Alternatively, you could replace the “can” in Line 2 with physical possibility, and then the argument would be valid. I don’t know if that view has a name; it seems pretty implausible.
No, I think Kant’s “ought implies can” principle usually uses “can” to mean some kind of “practical possibility”, i.e. “possible given your powers and opportunities” or something like that. And whatever is possible in that sense is also physically possible (i.e. “possible given the actual state of the world and physical laws”). So the argument is still sound.
In other words:
Ought to be done ⊆ Can be done ⊆ Actually done ⇒ Ought to be done ⊆ Actually done
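Here is a minimal sketch formalizing that inference (my own formalization; the predicate names are placeholders, not from the post):

```lean
-- Sketch: model “ought to be done”, “can be done”, and “actually done”
-- as predicates on actions, and check that the inference goes through.
variable {Action : Type} (Ought Can Done : Action → Prop)

example
    (h1 : ∀ a, Can a → Done a)    -- determinism: whatever can be done is done
    (h2 : ∀ a, Ought a → Can a)   -- “ought implies can”
    : ∀ a, Ought a → Done a :=    -- whatever ought to be done is done
  fun a ha => h1 a (h2 a ha)
```

Note that the sketch uses a single Can predicate in both premises, which is exactly the assumption under dispute above: if the two “can”s pick out different modalities, the premises no longer share a middle term and the inference fails.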
My fuzzy intuition would be to reject Ought to be done ⊆ Can be done (step 2 of your argument) if we accept determinism. And my actual philosophical position would be that these types of questions are not very useful and generally downstream of more fundamental confusions.
What fundamental confusions?
This seems closely related to an argument I vaguely remember from a philosophy class:
1. A person is not morally culpable for something if they could not have done otherwise.
2. If determinism is true, there is only one thing a person could do.
3. If there is only one thing a person could do, they could not have done otherwise.
4. If determinism is true, then whatever someone does, they are not morally culpable. (from 1, 2, 3)
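A minimal sketch formalizing that argument (again my own formalization, with placeholder predicate names):

```lean
-- Sketch of the culpability argument: “Culpable”, “CouldDoOtherwise”, and
-- “OnlyOneOption” are placeholder predicates, not terms from the comment.
variable {Person Act : Type}
variable (Determinism : Prop)
variable (Culpable CouldDoOtherwise : Person → Act → Prop)
variable (OnlyOneOption : Person → Prop)

example
    -- 1. Not culpable for an act if one could not have done otherwise.
    (h1 : ∀ p a, ¬ CouldDoOtherwise p a → ¬ Culpable p a)
    -- 2. Determinism leaves each person only one course of action.
    (h2 : Determinism → ∀ p, OnlyOneOption p)
    -- 3. With only one course of action, one could not have done otherwise.
    (h3 : ∀ p a, OnlyOneOption p → ¬ CouldDoOtherwise p a)
    -- 4. Conclusion: under determinism, no one is culpable for anything.
    : Determinism → ∀ p a, ¬ Culpable p a :=
  fun hd p a => h1 p a (h3 p a (h2 hd p))
```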
In fact, the argument is basically the same, I think. And I know Michael Huemer has a post using it in the modus ponens form to write a proof of free will, presuming moral realism.
(MFT is his “minimal free-will thesis”: at least some of the time, someone has more than one course of action that he can perform).
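1. With respect to the free-will issue, we should refrain from believing falsehoods. (premise)
2. Whatever should be done can be done. (premise)
3. If determinism is true, then whatever can be done, is done. (premise)
4. I believe MFT. (premise)
5. With respect to the free-will issue, we can refrain from believing falsehoods. (from 1, 2)
6. If determinism is true, then with respect to the free-will issue, we refrain from believing falsehoods. (from 3, 5)
7. If determinism is true, then MFT is true. (from 6, 4)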
This man’s modus ponens is definitely my modus tollens. It seems super cursed to use moral premises to answer metaphysics problems. In this argument, except for step 8, you can replace belief in free will with anything, and the argument says that determinism implies that any widely held belief is true.
“Ought implies can” should be something that’s true by construction of your moral system, rather than something you can just assert about an arbitrary moral system and use to derive absurd conclusions.
I suspect that “ought implies can” comes from legal/compatibilist thinking, i.e. you can do something if it is generally within your powers, and you are not being actively compelled to do otherwise.
Yes, I agree, to be clear.
Some thoughts on reconciling physical determinism with morality —
The brains of agents are where those agents’ actions are calculated. Although agents are physically determined, they can be arbitrarily computationally intractable, so there is no general shortcut to predict their actions with physics-level accuracy. If you want to predict what agent Alice does in situation X, you have to actually put Alice in situation X and observe. (This differentiates agents from things like billiard-balls, which are computationally tractable and can be predicted using simple physics equations.)
And yet, one input to an agent’s decision process is its prediction of other agents’ responses to the actions the agent is considering. Since agents are hard to predict, a lot of computation has been spent on doing this! And although Alice cannot in general and with physics-level accuracy predict Bob’s responses to her actions, there are a lot of common regularities in the pattern of agents’ responses to other agents’ actions.
Some of these regularities have to do with things like “this agent supports or opposes that agent’s actions” or “these agents join together to support or oppose that agent’s actions” or “this agent alters the incentive structure under which another agent decides its actions” or “this group of agents are cooperating on achieving a common goal” or “this agent aims to stop that agent from existing, while that agent aims to keep existing” and other relatively compactly-describable sorts of things.
Even though “Alice wants to live” is not a physics-level description of Alice, it is still useful for predicting Alice’s actions at a more abstract level. Alice is not made of wanting-to-live particles, but Alice reliably refrains from jumping off cliffs or picking fights with tigers; instead she cooperates with other agents towards common goals of supporting one another’s continued living, and so on.
And things like morality make sense at that level, describing regularities in inter-agent behavior at a much higher level than physical determinism; much as an operating system’s scheduler operates at a much higher level than logic gates.
Things like morality, such as economics, describe behaviour. Morality, however, is normative.
It should not come as a surprise that reductionism doesn’t require you to abandon all high level concepts.
Yes, an obvious flaw is that 1 is obviously false. Though 2 is also false, depending upon exactly how you view the term “a person”.
Why is (1) obviously false?
Partly for the reasons outlined in my comment here. Mainly the following section:
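Even under the most hardcore determinism and assuming immutable agents, they can be classified into those that would and those that wouldn’t have performed that act and so there is definitely some sort of distinction to be made.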
In another comment (that I’m not finding after some minutes of search) I outline why this distinction is one that should be (and is) called moral culpability for all practical and most philosophical purposes. The few exceptions aren’t relevant here, since even one counterexample renders the argument invalid.
Yeah, seems like it fails mainly on 1, though I think that depends on whether you accept the meaning of “could not have done otherwise” implied by 2 and 3. But if you accept a meaning that makes 1 true (or, at least, less obviously false), then the argument is no longer valid.
By analogous reasoning, if determinism is true, then whatever ought not to be done also actually is done.
Why? If you’re taking as a premise that “Whatever ought not to be done can actually be done” then I don’t think that makes sense.
I think that makes as much sense as “Whatever ought to be done can actually be done”. Do you have some argument that makes sense of one but not the other?
It makes intuitive sense to me to say that if you have no way to do something, then it’s nonsensical to say that you should do that thing. For example, if I say that you should have arrived at an appointment on time and you say that it would have been impossible because I only told you about it an hour ago and it’s 1000 miles away, then it would be nonsensical for me to say that you should have arrived on time anyway. This is equivalent to saying that if you should do something, then you can do it.
The analogous claim “Whatever ought to be avoided can actually be done” doesn’t make sense because there’s no equivalent intuition.
The analogous argument would be:
If I have no way to do something, then it’s nonsensical to say that I should avoid doing that thing. For example, if you say that I should have avoided arriving at an appointment on time and I say that it would have been impossible because you only told me about it an hour ago and it’s 1000 miles away, then it would be nonsensical for you to say that I should have avoided arriving on time anyway. This is equivalent to saying that if I should avoid doing something, then I can do it.
I don’t think this premise is as intuitive. For example, if someone said that a quadriplegic should have saved a nearby drowning child, then the objection immediately appears that it wouldn’t have been possible, and so the “should” claim isn’t reasonable. On the other hand, if you say that the quadriplegic should avoid intentionally drowning the child, I don’t think that’s clearly nonsensical or false.
“You should have taken every opportunity that you could to get there on time.”
“I did. I had zero opportunities to do so, and I took all zero of them.”
Since agents are running under computational constraints, there are many ought statements which might not be fulfilled, e.g. due to chaotic systems. So in practice, even in a deterministic universe, agents can’t guarantee that ought → can.