What should our containers do?

If we don’t have free will, we’re just minds stuck inside (biological) robot bodies and brains. Some argue this would mean there’s no point in trying to determine which courses of action are right, or which facts are true, since we wouldn’t have control over how we behave or what we believe. In other words, we can ignore the possibility of being stuck in robot bodies, because if it’s true, nothing matters anyway. Let’s call this the “unimportant possibility” conjecture. It’s like assuming there isn’t something weird like a false vacuum bubble wall hurtling towards us, since there’d be nothing we could do to stop it from destroying Earth almost instantly anyway[1].

But that’s a confusion of levels. Sure, it doesn’t matter what I, Richard’s experiencing mind, think about free will. But it sure matters what Richard’s brain thinks and how it makes Richard’s body behave. Richard could end up hurting people if he started thinking the wrong things, and that would cause their minds to suffer. And since all the convincing and arguing that goes on in the world happens between these brains and bodies, the “unimportant possibility” conjecture doesn’t apply. Richard’s brain should use its body to type words and try to explain this idea to other brains (and their minds will listen in). Imagine you’re watching a movie, and someone is trying to convince the main character that they have no free will. You may find yourself chanting “don’t listen, you have free will!” under your breath. It doesn’t matter what you think as the viewer (it will have no impact on the plot of the movie), but it does matter what the character thinks.

Moreover, it could be important for all these brains and bodies to understand that there are minds housed inside of them that have feelings, and are ultimately what matter. For example, maybe science could one day unlock the ability for minds to send messages back to brains. That capability would only be discovered and used effectively once the brains understand that these minds exist as separate entities. It’s actually rather defeatist to assume that there will never, in the billions of years ahead of us, be a way for minds to speak back.[2]

There are lots of other ways this knowledge of the nonexistence of free will could matter, especially if the link between brains and minds isn’t perfect. It could turn out that in some cases, when the brain experiences a specific set of positive feelings, the mind experiences negative feelings. If so, the brains could avoid those positive feelings in order to spare the minds pain.

What does this all mean, practically? I claim that we should view the chance of having no free will not as an “unimportant possibility”, but as a “very important possibility”. I claim that it’s an important problem to solve, and that, depending on our understanding of it, rationality dictates that our behavior should change.

  1. ^

    Not a perfect analogy, since we could still “eat, drink, and be merry” before we die. If we can’t make decisions at all, even this comfort isn’t open to us.

  2. ^

    Or maybe they already have some limited communication back, and that’s why brains are thinking about things like minds in the first place. As I’ve paid more attention to my own sensations of free will, I’ve noticed that a lot of the time I don’t actually have the sensation at all, and I’m just letting myself run on autopilot. This isn’t just for manual tasks like eating; it happens during conversations, programming, and other intellectual tasks. Truly intelligent decisions and more deliberate, thought-out choices seem to feel more free-will-ish to me, but further introspection is required here. It could also be that my mind is just observing multiple layers of intelligence interacting with each other, not observing the difference between the mind assuming control and the brain assuming control. What would it actually feel like to make a decision anyway?