Free will as an appearance to others

Free will

Consider creatures. “Creature” is really hard to define in general, but for now let’s just consider biological creatures. They are physical systems.

An effectively deterministic system, or an apparent machine, is a system whose behavior can be easily predicted (using only a little time/energy) by the creature making the judgment, from its initial state and immediate surroundings.

An effectively teleological system, or an apparent agent, is a system whose behavior cannot be predicted as above, but whose future state can be predicted in some sense.

In what sense, though, needs work: if I can predict that you will eat food, but not how, that should count. If I can predict that you will eat chocolate at 7:00, though I don’t know how you will manage it, that might count as less free. Perhaps the right measure is something like information-theoretic “surprise”, or “maximizing entropy”? More investigation needed.

Basically, an apparent machine is someone you can predict very well, and an apparent agent is someone you can predict only in a limited, big-picture way.
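To make the “surprise” idea concrete, here is a minimal Python sketch, assuming one possible formalization (not one the text commits to): measure Shannon entropy over a predicted distribution of a creature’s behaviors, once at the level of detailed actions and once at the level of big-picture outcomes. The numbers and categories are purely hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy in bits; higher means harder to predict."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical predicted distributions for one creature.
# Detailed level: 100 equally likely fine-grained action sequences
# (which route to the kitchen, at exactly what time, and so on).
detailed_actions = [1 / 100] * 100

# Big-picture level: two coarse outcomes.
outcomes = [0.9, 0.1]  # "eats chocolate around 7:00" vs "doesn't"

print(entropy(detailed_actions))  # ~6.6 bits: very surprising in detail
print(entropy(outcomes))          # ~0.5 bits: barely surprising in outcome

# Under this (assumed) reading: an apparent machine has low entropy at both
# levels; an apparent agent has high entropy over detailed actions but low
# entropy over outcomes -- free in detail, predictable in the big picture.
```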

A successful agent needs to figure out what other agents are going to do. But it’s too hard to model them as apparent machines, simply because of how complicated creatures are. It’s easier to model them as apparent agents.

Apparent agents are apparently free: they aren’t apparently deterministic.

Apparent agents are willful: they take actions.

Thus, apparent agents apparently have free will. To say someone “has free will” means that they are a creature that does things in a way you can’t predict in detail but can somewhat predict in outcome. Machines can be willful or not, but they are not free.

In this theory, free will becomes a property possessed not by creatures in themselves, but by creatures as they interact with and appear to other creatures.

Eventually, some creatures evolved to turn this line of thought onto themselves, probably the animals that are very social and need to think about themselves constantly, like humans.

And that’s how humans came to think they themselves have free will.

Perhaps all complicated systems that can think are necessarily too complicated to predict themselves; if so, they would all consider themselves to have free will.

From free to unfree

With more prediction power, a creature could model other creatures as apparent machines instead of apparent agents. This is in fact how humans have been treating other animals; Descartes is a famous example. But any creature can be an apparent machine to someone with enough computing power.

Thinking of a creature as a machine to operate rather than an agent to negotiate with is usually regarded as psychopathic. Most psychopathic humans are that way not because of an intelligent confidence in predicting other humans, but because of a lack of empathy/impulse control caused by some environmental/genetic/social/brain abnormality.

But psychopathic modeling of humans can happen in an intelligent, honest way, if someone (say, a great psychologist) becomes so good at modeling humans that other humans are entirely predictable to him.

This has been achieved in a limited way by advertising companies, attention design, and politics. The manipulation of the 2016 American election by Cambridge Analytica is an example of honest psychopathy. It will become more prevalent and more subtle, since overt manipulation prompts humans to deliberately become less predictable as a defense.

Emotionally intelligent robots/electronic friends could become benevolent psychopaths. They will be (hopefully) benevolent, or at least designed to be. And they will become more and more psychopathic (not in the usual “evil” sense, I emphasize) as they become better at understanding humans. This is one possible reason for humans to limit the power of their electronic friends: an unwillingness to be modelled as machines instead of agents.