Comments on A critical agential account of free will, causation, and physics
Consider the statement: “I will take action A”. An agent believing this statement may falsify it by taking any action B not equal to A. Therefore, this statement does not hold as a law. It may be falsified at will.
We can imagine a situation where there is a box containing either an apple or a pear. Suppose we believe that it contains a pear, but it actually contains an apple. If we look in the box (and we have good reason to believe looking doesn’t change the contents), then we’ll falsify our pear hypothesis. Similarly, if we’re told by an oracle that if we looked we would see an apple, then there’d be no need for us to actually look; we’d have heard enough to falsify our pear hypothesis.
However, the situation you’ve identified isn’t the same. Here you aren’t just deciding whether to make an observation or not, but what the value of that observation would be. So in this case, the fact that if you took action B you’d observe that the action you took was B doesn’t say anything about the case where you don’t take action B, whereas knowing that if you looked in the box you’d see an apple provides you information even if you don’t look in the box. It simply isn’t relevant unless you actually take B.
Interestingly, falsificationism takes agency (in terms of observations, computation, and action) as more basic than physics. For a thing to be falsifiable, it must be able to be falsified by some agent, seeing some observation. And the word “able” implies freedom.
I think it’s reasonable to suggest starting from falsification as our most basic assumption. I guess where you lose me is when you claim that this implies agency. My position is as follows:
It seems like agents in a deterministic universe can falsify theories in at least some sense. For example, they can take two different weights, drop them, and see that they land at the same time, falsifying the theory that heavier objects fall faster.
On the other hand, something like agency or counterfactuals seems necessary for talking about falsifiability in the abstract, as this involves saying that we could falsify a theory if we ran an experiment that we didn’t actually run.
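The first point can be sketched as a toy simulation. Assuming idealized Newtonian free fall with no air resistance (an illustration only, with made-up masses and heights), the fall time depends only on the drop height, so a perfectly deterministic agent’s observation still contradicts the “heavier falls faster” hypothesis:

```python
import math

def fall_time(mass_kg: float, height_m: float, g: float = 9.81) -> float:
    """Idealized free-fall time: t = sqrt(2h / g). Mass cancels out."""
    return math.sqrt(2 * height_m / g)

# Hypothesis under test: heavier objects fall faster (shorter fall time).
t_light = fall_time(mass_kg=1.0, height_m=10.0)
t_heavy = fall_time(mass_kg=10.0, height_m=10.0)

# The observed times are equal, so the hypothesis is falsified.
print(t_heavy < t_light)  # False
```

Nothing in this sketch requires the agent to have had alternatives; it only requires that the observation and the theory’s prediction can be compared.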
In the second case, I would suggest that what we need is counterfactuals, not agency. That is, we need to be able to say things like, “If I ran this experiment and obtained this result, then theory X would be falsified”, not “I could have run this experiment, and if I did and obtained this result, then theory X would be falsified”.
In other words, I think that there is something behind the intuition which I’m guessing led you to these views, but I’m in favour of developing it in a different direction than you have.
I didn’t read past this point, not because I thought it was uninteresting, but because it already took me a while to figure out how to articulate my objections to the article up to this point and I still have to look at one of your posts. But let me know if there’s anything further down more directly related to whether counterfactuals are circular.
It seems like agents in a deterministic universe can falsify theories in at least some sense. For example, they can take two different weights, drop them, and see that they land at the same time, falsifying the theory that heavier objects fall faster.
The main problem is that it isn’t meaningful for their theories to make counterfactual predictions about a single situation; they can create multiple situations (across time and space) and assume symmetry and get falsification that way, but it requires extra assumptions. Basically you can’t say different theories really disagree unless there’s some possible world / counterfactual / whatever in which they disagree; finding a “crux” experiment between two theories (e.g. if one theory says all swans are white and another says there are black swans in a specific lake, the cruxy experiment looks in that lake) involves making choices to optimize disagreement.
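The swan example in the last sentence can be sketched as a minimal toy model (the theory functions and lake names here are hypothetical, not from the original comment): theories map experiments to predicted observations, and a crux experiment is one on which the predictions differ.

```python
# Toy model: a theory maps an experiment (here, a lake to inspect)
# to a predicted observation.
def all_swans_white(lake: str) -> str:
    return "white swans"

def black_swans_in_lake_x(lake: str) -> str:
    return "black swans" if lake == "Lake X" else "white swans"

def crux_experiments(theory_a, theory_b, experiments):
    # A crux experiment is one where the two theories' predictions
    # disagree; finding it is a choice made to maximize disagreement.
    return [e for e in experiments if theory_a(e) != theory_b(e)]

lakes = ["Lake W", "Lake X", "Lake Y"]
print(crux_experiments(all_swans_white, black_swans_in_lake_x, lakes))
# ['Lake X'] -- only looking in Lake X can distinguish the theories
```

The point carries over: looking in any lake other than Lake X yields the same observation under both theories, so only the chosen crux experiment makes their disagreement testable.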
In the second case, I would suggest that what we need is counterfactuals, not agency. That is, we need to be able to say things like, “If I ran this experiment and obtained this result, then theory X would be falsified”, not “I could have run this experiment, and if I did and obtained this result, then theory X would be falsified”.
Those seem pretty much equivalent? Maybe by agency you mean utility function optimization, which I didn’t mean to imply was required.
The part I thought was relevant was the part where you can believe yourself to have multiple options and yet be implemented by a specific computer.
Basically you can’t say different theories really disagree unless there’s some possible world / counterfactual / whatever in which they disagree;
Agreed, this is yet another argument for considering counterfactuals to be so fundamental that they don’t make sense outside of themselves. I just don’t see this as incompatible with determinism, because I’m grounding things in counterfactuals rather than agency.
Those seem pretty much equivalent? Maybe by agency you mean utility function optimization, which I didn’t mean to imply was required.
I don’t mean utility function optimization, so let me clarify what I see as the distinction. I see my version as compatible with the determinist claim that you couldn’t have run the experiment because the path of the universe was determined from the start. I’m referring to a purely hypothetical running of the experiment, with no reference to whether you could or couldn’t have actually run it.
Hopefully, my comments here have made it clear where we diverge, and this provides a target if you want to make a submission (that said, the contest is about the potential circular dependency of counterfactuals and not just my views, so it’s perfectly valid for people to focus on other arguments for this hypothesis rather than my specific arguments).