Explaining Volition Without Resorting to Free Will
People often use free will to explain how we make choices, but they have great difficulty explaining how free will itself works. Philosophers gesture towards ideas like “the capacity to choose” or “the freedom to do otherwise”, but these concepts just raise the same question for me: what are “capacity” and “freedom”? I suspect that the reason free will is so hard to explain is that it doesn’t actually clarify anything. It’s a fake explanation, like the physics textbook that says everything runs on energy.[1] “What makes the bicycle move?” Energy! “How do we make choices?” Free will! It doesn’t really answer anything.
However, most people do feel like they make choices. I think making choices is a real phenomenon; it’s just that the answer isn’t “free will”. When I want to answer the question of how we make choices, I look at the process by which it occurs. For me, that consists of gathering information to determine the best action, and then performing that action. If you look at it from a purely mechanistic lens, choice-making is simply following some function from information to actions:
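That mapping can be sketched as an ordinary function. This is a minimal illustrative sketch, not anything from the post; the action names and the `distance_km` field are invented for the example:

```python
# A minimal sketch of a choice function: a plain mapping from gathered
# information to an action. All names here are invented for illustration.

def choice_function(information: dict) -> str:
    """Pick an action based on the available information."""
    distance = information.get("distance_km", 0)
    # Hypothetical decision rule: short trips by bicycle, long ones by train.
    if distance <= 5:
        return "ride_bicycle"
    return "take_train"

action = choice_function({"distance_km": 3})  # a short trip
```

Nothing about such a function requires anything beyond ordinary computation, which is the point: the thread scheduler below fits the same mold.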
From this perspective, many things make choices that we might not traditionally consider to have free will. For example, my computer chooses which threads to schedule on which processors.
Nevertheless, there is definitely a difference between a computer and myself. Someone programmed the computer to make its choices, but I make my own choices. What explains the difference between us?
I think it’s simply a matter of self-reference and bootstrapping. The computer doesn’t write its own code (at least, not yet), but we modify our own choice functions. Namely, given certain kinds of information—such as that an action led to a suboptimal outcome—our choice functions output actions to modify themselves. When the choice function is sufficiently clever, it can even realize that these updates may be suboptimal, and bootstrap itself to an even cleverer choice function.
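The self-modification step can be sketched as a toy program, under a strong simplification I am introducing for illustration: the choice function’s entire “code” is a single numeric threshold, and one of its outputs rewrites that threshold when it learns an outcome was suboptimal.

```python
# A toy sketch of a self-modifying choice function. The only part of
# itself it can rewrite is a single threshold; real bootstrapping (also
# modifying the update rule itself) is omitted for brevity.

class Chooser:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # the part of itself it can rewrite

    def choose(self, signal: float) -> str:
        return "act" if signal > self.threshold else "wait"

    def update(self, signal: float, outcome_was_bad: bool) -> None:
        # Feedback that an outcome was suboptimal changes the choice
        # function: the next call to choose() runs different "code".
        if outcome_was_bad:
            chosen = self.choose(signal)
            self.threshold += 0.1 if chosen == "act" else -0.1

c = Chooser()
before = c.choose(0.6)            # "act"
c.update(0.6, outcome_was_bad=True)
after = c.choose(0.6)             # "wait": the function modified itself
```

After enough such updates, the threshold owes more to the update history than to its initial value, which is the sense of “responsible for its own choices” described above.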
Eventually, the choice function is determined mostly by the modifications it has done to itself, and it starts to make sense to say the choice function is responsible for its own choices. I think this is the answer to the mystery behind the sense of ownership we have of our actions.
Volition doesn’t have to be mysterious.
- ^ “Judging Books by Their Covers,” Surely You’re Joking, Mr. Feynman!
nice. i don’t think it’s quite enough, though.
we could set up a pid controller to tune itself until it’s able to balance an inverted pendulum. this seems to meet your definition. are you comfortable granting such a system ‘volition’?
In my essay, I was using volition mostly as just a synonym for choice-making. So, it makes choices on which direction to push the pendulum. But maybe you are asking whether the PID controller “owns” the choices it makes? I would say it owns them more than a PID controller tuned by a human owns its choices, but less than a human owns his/her own choices. The human, after all, can modify how he/she goes about learning in the first place, while the PID controller you have described cannot modify its tuning algorithm.
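The distinction drawn in this reply can be put in code. This is a drastically simplified, hypothetical setup, loosely inspired by the PID example above: a proportional controller on a stable first-order plant rather than a full PID on an inverted pendulum, with a fixed hill-climbing rule as the tuning algorithm the controller cannot itself modify.

```python
# Illustrative only: a controller that tunes its own gain, but whose
# tuning rule is hard-coded. The plant and numbers are invented.

def run_plant(gain: float, setpoint: float = 1.0, steps: int = 50) -> float:
    """Simulate x' = -x + gain * error; return accumulated |error|."""
    x, total_error, dt = 0.0, 0.0, 0.1
    for _ in range(steps):
        error = setpoint - x
        total_error += abs(error)
        x += dt * (-x + gain * error)  # plant driven by proportional control
    return total_error

def self_tune(gain: float = 0.1, rounds: int = 20) -> float:
    """The fixed tuning rule: accept a larger gain only if it tracks better.
    The controller modifies its gain, but never this rule itself."""
    step = 0.5
    for _ in range(rounds):
        if run_plant(gain + step) < run_plant(gain):
            gain += step
    return gain

tuned = self_tune()
```

On the view in the reply, the system owns its gain (it set it), but not its tuning rule (a human wrote `self_tune`); a human can revise how they learn, one level up.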
I don’t think calling it a “choice function” really changes the mystery. Is it deterministic (based on brain configuration), or is there some non-physical force that’s making it “not deterministic, but not random”?
Personally, I think it’s mostly an illusion—it’s similar to the temperature setting in LLMs. It’s some amount of unpredictability, which may not be true randomness, but which is opaque to any observer due to the complexity of the underlying neurological (or electronic) processes. And there are lots of somewhat-more-introspectable structures which can constrain or influence the behaviors, and which try to explain them as “choices”.
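For readers unfamiliar with the temperature analogy: in LLM sampling, logits are divided by a temperature before the softmax, so low temperature makes the highest-scoring option dominate, while high temperature spreads probability across options and makes the output look more unpredictable. A self-contained sketch with toy logits (not a real model):

```python
# Temperature-scaled sampling over toy "logits", for illustration only.
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.0]
# Low temperature: the top option dominates. High: choices spread out.
low = [sample_with_temperature(logits, 0.1, rng) for _ in range(100)]
high = [sample_with_temperature(logits, 10.0, rng) for _ in range(100)]
```

The unpredictability at high temperature comes from an explicit random draw here; the comment’s point is that in brains the analogous opacity need not be true randomness, just complexity.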
I didn’t want to include this in the main post, because I wanted to keep it concise and on-topic, but I don’t think determinism is relevant. If you were given the options of receiving a million dollars or receiving death, you would damn well try to make your choice as deterministic as possible. That doesn’t stop it from being a choice you make. Likewise, a computer can be given a source of entropy and run a random algorithm; computers need not be deterministic. Stochasticity doesn’t magically give you any more “real” choices than you had before.
I think people get the idea that stochasticity is necessary for choosing because they consider a real choice to require the possibility of choosing otherwise. So, they imagine a choice function like a brain split in two, sometimes choosing one way and sometimes choosing another:
However, I think this is the wrong model to have for the possibility of choosing otherwise. Instead, you should imagine that a different choice function in your place might choose a different action:
This solves the problem of the capacity to choose otherwise without requiring stochasticity. That a choice is made is just a way of pointing out that there is some choice function, a chooser, and that a different chooser in its place would have resulted in a different action. That it is one chooser and not another determining the action is the whole point of saying that that chooser, and not the other, made the choice. In my view of choosing, choices are still made even if the universe is deterministic; it’s just that the choosers are determined beforehand in where they will be. But that doesn’t make the concept of a chooser or a choice useless, any more than abstracting a clump of particles as a rock is useless. We abstract clumps of particles as rocks because we can model rocks more simply than modelling all the particles one by one. We abstract choosers because there are choosers: entities, such as humans, that take actions based on information they gather.
On the other hand, the question of whether someone is to be held responsible for their choices is a social problem.
So how is “choice function” different from “free will” in any significant externally-visible way? Both of them take information and brain state as inputs and an action as output. The concept of both includes counterfactual “path not taken” as meaningfully possible.
What’s the actual distinction that makes it a “choice function” rather than “free will”?
I think I would need you to explain what you mean by free will for me to be able to answer that.
I find “free will” to be an anti-useful concept. You can remove it from your vocabulary and you’ll never miss it. “Free will”, besides being confusing with its varying definitions and historical/religious baggage, pushes us to ask the wrong questions and focus on the wrong things. When someone else uses the concept in conversation/dialogue, ask what they mean by “free will” or why free will matters in the context.
Yep! That’s why I’m trying to explain choice-making without this “free will” concept.
It’s easy enough to avoid the phrase “free will”, but the concept itself is harder to avoid, not least because it’s actually several concepts.
Compatibilist free will is the lowest bar to clear. Almost any mechanism of choice would amount to CFW. So it’s not controversial apart from whether it’s what we centrally mean by free will.
Libertarian free will involves an additional ingredient: leeway, or the ability to have done otherwise. The ability to have done otherwise doesn’t seem possible in a physically determined universe, leading to the worry that free will is a supernatural process in which an immaterial soul overrides the physical causality in the brain. Supernatural libertarian free will is easily refuted by naturalism.
That leaves naturalistic libertarian free will as the controversial case.
Philosophers don’t have much to say about the nature of the capacity to choose, but then it’s not what’s controversial. What’s controversial is the ability to have done otherwise—which is itself controversially linked to moral responsibility.
The ability to have done otherwise is easily possible in an undetermined universe, but such models face a series of worries about control and purposiveness.
Self-modification doesn’t give you any ability to have done otherwise at all; it’s quite compatible with determinism. In a deterministic universe, the progress of a self-modifying mechanism is as determined as anything else.
But the mechanism could have an indeterministic element, in which case it coincides with libertarian free will. The right sort of mechanism could even resolve the worries about control and purpose.
ETA:
That’s one kind of case: one where you are making a decision for personal benefit, and it’s very clear which way to go. There are also torn decisions, where you have desires pulling in both directions, or your desires conflict with external morality, etc.
But do you know that? Surely establishing how the capacity for choice actually works requires empirical investigation.
It could stop it from having certain characteristics beyond being a choice.
Undetermined choices are more momentous, because an open, non-inevitable future depends on them.
Determinism allows you to cause the future in a limited sense. Under determinism, events still need to be caused, and your (determined) actions can be part of the cause of a future state that is itself determined, that has probability 1.0. Determinism allows you to cause the future, but it doesn’t allow you to control the future in any sense other than causing it (and the sense in which you are causing the future is just the sense in which any future state depends on causes in the past; it is nothing special, and nothing different from physical causation). It allows, in a purely theoretical sense, “if I had made choice b instead of choice a, then future B would have happened instead of future A”... but without the ability to have actually chosen b.
Under determinism, you are a link in a deterministic chain that leads to a future state, so without you, the state will not happen... not that you have any choice in the matter. You can’t stop or change the future, because you can’t fail to make your choices, or make them differently. You can’t do anything of your own, since everything about you and your choices was determined at the time of the Big Bang. Under determinism, you are nothing special... only the BB is special.
(This is still true under many worlds. Even though MWI implies that there is not a single inevitable future, it doesn’t allow you to influence the future in a way that makes future A more likely than future B as a result of some choice you make now. Under MW determinism, the probabilities of A and B are what they are, and always were: before you make a decision, after you make a decision, and before you were born. You can’t choose between them, even in the sense of adjusting the probabilities.)
By contrast, libertarian free will does allow the future to depend on decisions which are not themselves determined. That means there are valid statements of the form “if I had made choice b instead of choice a, then future B would have happened instead of future A”. And you actually could have made choice a or choice b... these are real possibilities, not merely conceptual or logical ones. That in turn means that the future is not inevitable, and can be shaped, not merely caused: a free agent can create or steer towards a variety of futures. For a free agent, doom does not have to be inevitable.
It’s like the difference between a car and a train: the train goes somewhere, but it can’t jump off the tracks.
In fact, determinists don’t even need the conditionals. Under determinism, you can think of sets of pre-existing agents which make different decisions, or adopt different strategies, deterministically, and you can make claims about what results they get, without any of them deciding anything or doing anything differently. That additional, non-redundant sense of control is what would have been required to answer the concern that libertarians actually have about what determinism robs them of.
The situation is rather analogous to simulationism: a simulated universe might seem just like a real universe... but it isn’t real. And a deterministic universe might seem to contain decisions and actions... but they are not decisions and actions in the fullest senses of the terms, because they don’t make a difference. So there is precedent for saying that two things can be different without being visibly different.
Almost everyone, including rationalists, implicitly believes they have the ability to control the future, to steer towards better futures. In the case of rationalists, that is the motivation for AI safety and effective altruism.
How would you respond to my reply to Dagon?