This seems closely related to an argument I vaguely remember from a philosophy class:
1. A person is not morally culpable for something if they could not have done otherwise.
2. If determinism is true, there is only one thing a person could do.
3. If there is only one thing a person could do, they could not have done otherwise.
4. If determinism is true, then whatever someone does, they are not morally culpable.
In fact, the argument is basically the same, I think. And I know Michael Huemer has a post using it in modus ponens form to give a proof of free will, presuming moral realism.
(MFT is his “minimal free-will thesis”: at least some of the time, someone has more than one course of action that he can perform.)

1. With respect to the free-will issue, we should refrain from believing falsehoods. (premise)
2. Whatever should be done can be done. (premise)
3. If determinism is true, then whatever can be done, is done. (premise)
4. I believe MFT. (premise)
5. With respect to the free-will issue, we can refrain from believing falsehoods. (from 1, 2)
6. If determinism is true, then, with respect to the free-will issue, we refrain from believing falsehoods. (from 3, 5)
7. If determinism is true, then MFT is true. (from 6, 4)
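To make the logical skeleton explicit, here is one way to formalize it, a sketch in Lean; the propositional encoding and the `sound` premise (which spells out the implicit step from 6 and 4 to 7) are my glosses, not Huemer's own presentation. Note that the proof never looks inside MFT, which is exactly the substitution worry below.

```lean
-- A minimal propositional sketch (my encoding, not Huemer's notation).
--   S : we should refrain from believing falsehoods (re: the free-will issue)
--   C : we can so refrain      R : we do so refrain
--   D : determinism is true    B : I believe MFT
theorem huemer (S C R D B MFT : Prop)
    (p1 : S)                 -- 1. we should refrain from believing falsehoods
    (p2 : S → C)             -- 2. ought implies can
    (p3 : D → C → R)         -- 3. under determinism, what can be done is done
    (p4 : B)                 -- 4. I believe MFT
    (sound : R → B → MFT)    -- implicit step: if we refrain from believing
                             -- falsehoods and I believe MFT, then MFT is true
    : D → MFT :=
  fun d => sound (p3 d (p2 p1)) p4   -- steps 5, 6, 7 chained together
```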
This man’s modus ponens is definitely my modus tollens. It seems super cursed to use moral premises to answer metaphysics problems. Nothing in this argument depends on MFT being about free will: swap any widely held belief into step 4, and the argument says that determinism implies that belief is true.
“Ought implies can” should be something that’s true by construction of your moral system, rather than something you can just assert about an arbitrary moral system and use to derive absurd conclusions.
I suspect that “ought implies can” comes from legal/compatibilist thinking, i.e. you can do something if it is generally within your powers and you are not being actively compelled to do otherwise.
Yes, to be clear, I agree.
Some thoughts on reconciling physical determinism with morality —
The brains of agents are where those agents’ actions are calculated. Although agents are physically determined, they can be arbitrarily computationally intractable, so there is no general shortcut to predict their actions with physics-level accuracy. If you want to predict what agent Alice does in situation X, you have to actually put Alice in situation X and observe. (This differentiates agents from things like billiard balls, which are computationally tractable and can be predicted using simple physics equations.)
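As a toy illustration (the code and the hash-iteration stand-in are my invention, not a model of real brains): a decision procedure can be fully deterministic and still have no cheaper predictor than running it.

```python
import hashlib

def alice(situation: str) -> str:
    """A fully deterministic decision rule with no known closed form:
    the output is reachable only by performing the iterated computation."""
    state = situation.encode()
    for _ in range(1_000_000):  # stand-in for intractable deliberation
        state = hashlib.sha256(state).digest()
    return "cooperate" if state[0] % 2 == 0 else "defect"

# The only general way to "predict" Alice in situation X is to put her in X:
print(alice("offered a fair split"))
```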
And yet, one input to an agent’s decision process is its prediction of other agents’ responses to the actions the agent is considering. Since agents are hard to predict, a lot of computation has been spent on doing this! And although Alice cannot in general and with physics-level accuracy predict Bob’s responses to her actions, there are a lot of common regularities in the pattern of agents’ responses to other agents’ actions.
Some of these regularities have to do with things like “this agent supports or opposes that agent’s actions” or “these agents join together to support or oppose that agent’s actions” or “this agent alters the incentive structure under which another agent decides its actions” or “this group of agents are cooperating on achieving a common goal” or “this agent aims to stop that agent from existing, while that agent aims to keep existing” and other relatively compactly-describable sorts of things.
Even though “Alice wants to live” is not a physics-level description of Alice, it is still useful for predicting Alice’s actions at a more abstract level. Alice is not made of wanting-to-live particles, but Alice reliably refrains from jumping off cliffs or picking fights with tigers; instead she cooperates with other agents towards common goals of supporting one another’s continued living, and so on.
And things like morality make sense at that level, describing regularities in inter-agent behavior at a much higher level than physical determinism, much as an operating system’s scheduler operates at a much higher level than logic gates.
Things that are merely like morality, such as economics, describe behaviour. Morality itself, however, is normative.
It should not come as a surprise that reductionism doesn’t require you to abandon all high-level concepts.
Yes, an obvious flaw is that 1 is obviously false. Though 2 is also false, depending upon exactly how you view the term “a person”.
Why is (1) obviously false?
Partly for the reasons outlined in my comment here. Mainly the following section:

Even under the most hardcore determinism, and assuming immutable agents, they can be classified into those that would and those that wouldn’t have performed that act, and so there is definitely some sort of distinction to be made.

In another comment (which I’m not finding after some minutes of search) I outline why this distinction is one that should be (and is) called moral culpability for all practical and most philosophical purposes. The few exceptions aren’t relevant here, since even one counterexample renders the argument unsound.
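A toy version of that classification (the agents and the situation are invented for illustration): since the agents are deterministic and immutable, “would have performed that act” is well defined; just run each agent in the situation and partition the population by what it does.

```python
# Deterministic, immutable agents: each is a fixed function of the situation.
def saint(situation: str) -> str:
    return "refrain"

def crook(situation: str) -> str:
    return "steal" if situation == "unguarded till" else "refrain"

agents = {"saint": saint, "crook": crook}
situation = "unguarded till"

# Partition the population by counterfactual behavior in this situation.
would_have = {name for name, agent in agents.items() if agent(situation) == "steal"}
print(would_have)  # {'crook'}: a real distinction, even under determinism
```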
Yeah, it seems like it fails mainly on 1, though I think that depends on whether you accept the meaning of “could not have done otherwise” implied by 2 and 3. But if you accept a meaning that makes 1 true (or, at least, less obviously false), then the argument is no longer valid.