Solution to the free will homework problem
Debates on free will often rely on questions like “Could I have eaten something different for breakfast today?”. We focused on the subproblem of finding an algorithm that answers “Yes” to that question and which, if implemented in the human brain, would therefore power the intuitions for one side of the free will debate. We came up with an algorithm that seemed reasonable, but we are much less sure how closely it resembles the way humans actually work.
The algorithm is supposed to answer questions of the form “Could X have happened?” for any counterfactual event X. It does this by searching for possible histories of events that branch off from the actual world at some point and end with X happening. Here, “possible” means that the counterfactual history doesn’t violate any knowledge you have, except knowledge derived from the fact that that history didn’t happen. To us this seemed like an intuitive algorithm for answering such questions, and at least related to what we actually did when we tried to answer them, but we didn’t justify it beyond that.
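As a toy illustration (not something from the original discussion), the search could be sketched like this, with events as strings and knowledge as a set of predicates over histories. The function name `could_have_happened`, the event alphabet, and the search bound are all hypothetical simplifications:

```python
from itertools import product

def could_have_happened(target, actual_history, knowledge, alphabet, max_len=2):
    """Answer "Could `target` have happened?" by searching for a possible
    history that branches off `actual_history` at some point and ends
    with `target`.

    A candidate history counts as "possible" if it satisfies every
    predicate in `knowledge`. Per the algorithm, knowledge derived only
    from the fact that the counterfactual didn't happen must be left
    out of `knowledge`.
    """
    for branch in range(len(actual_history) + 1):
        prefix = actual_history[:branch]
        # Bounded brute-force enumeration of continuations: a toy
        # stand-in for any real search over ways the world could go.
        for n in range(1, max_len + 1):
            for tail in product(alphabet, repeat=n):
                if tail[-1] != target:
                    continue  # the history must end with the target event
                candidate = prefix + list(tail)
                if all(pred(candidate) for pred in knowledge):
                    return True
    return False
```

The crucial modeling choice is what goes into `knowledge`: a predicate like “there was no cereal in the house” legitimately rules the counterfactual out, while a predicate encoding “I didn’t eat cereal” must be excluded, since it is derived from the counterfactual not having happened.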
The second important ingredient is that the exact decision procedure you use is unknown to the part of you that can reason about yourself. Of course, you know which decisions you made in which situations in the past. But beyond that, you don’t have a reliable way to predict the output of your decision procedure for a given situation.
Faced with the question “Could you have eaten something different for breakfast today?”, the algorithm now easily finds a possible history with that outcome. After all, the (unknown) decision procedure outputting a different decision is consistent with everything you know, except the fact that it did not actually do so, and that fact is exactly what gets ignored when judging whether a counterfactual “could have happened”.
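To make this concrete, here is a hypothetical sketch of the consistency check for decisions specifically, under the assumption that the self-model’s only knowledge about the opaque decision procedure is a log of past (situation, decision) pairs:

```python
def could_have_decided(alternative, situation, decision_log, ignore_current=True):
    """Is deciding `alternative` in `situation` consistent with what the
    self-model knows about its own (opaque) decision procedure?

    That knowledge is just `decision_log`: past (situation, decision)
    pairs. When judging the counterfactual, the log entry for the very
    situation in question records only the fact that the counterfactual
    didn't happen, so it is ignored.
    """
    relevant = [(s, d) for (s, d) in decision_log
                if not (ignore_current and s == situation)]
    # The opaque procedure is unconstrained in any situation that no
    # remaining log entry pins down, so the counterfactual is consistent
    # unless a remaining entry for this situation contradicts it.
    return all(d == alternative for (s, d) in relevant if s == situation)
```

With `ignore_current=False`, i.e. counting the actual outcome as evidence against the counterfactual, the same question would come back “No”, which is exactly the conclusion the ignoring step is meant to block.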
Questions we haven’t (yet) talked about:
Does this algorithm for answering questions about counterfactuals give intuitive results when applied to examples (we only tried very few)? If not, it can’t be the one used by humans, since it would be generating those intuitions if it were.
What about cases where you can be pretty sure you wouldn’t choose some action, even without knowing your exact decision procedure? (e.g. “Could you have burned all that money instead of spending it?”)
You can use your inner simulator to imagine yourself in some situation and predict which action you would choose. How does that relate to being uncertain about your decision procedure?
So even though I think our proposed solution contains some elements that are helpful for dissolving questions about free will, it’s not complete, and we might discuss it again at some point.