Free will as an appearance to others

Free will

Consider creatures. “Creature” is really hard to define in general, but for now let’s just consider biological creatures. They are physical systems.

An effectively deterministic system, or an apparent machine, is a system whose behavior the creature making the judgment can predict easily (using only a little time/energy) from its initial state and immediate surroundings.

An effectively teleological system, or an apparent agent, is a system whose behavior cannot be predicted as above, but whose future state can be predicted in some sense.

In what sense, though, needs work: if I can predict that you will eat food, but not how, that should count. If I can predict that you will eat chocolate at 7:00, though I don’t know how you will do it, that might count as less free. Perhaps something like information-theoretic “surprise”, or “maximizing entropy”? More investigation needed. A rough sketch of the entropy idea follows.
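To make the entropy idea concrete, here is a minimal sketch in Python. The food outcomes and the probabilities are made-up illustrations, not anything from the essay; the point is only that the lower the Shannon entropy of a predictor’s distribution over your detailed actions, the less free you look to that predictor.

```python
import math

def surprisal(p: float) -> float:
    """Shannon surprisal, in bits, of an outcome assigned probability p."""
    return -math.log2(p)

def entropy(dist: dict) -> float:
    """Shannon entropy, in bits, of a distribution over outcomes."""
    return sum(-p * math.log2(p) for p in dist.values() if p > 0)

# Coarse predictor: "you will eat *something* at 7:00". Its distribution
# over your detailed actions stays spread out, so you still look free to it.
coarse = {"chocolate": 0.25, "bread": 0.25, "apple": 0.25, "soup": 0.25}

# Fine predictor: "you will eat chocolate at 7:00". Its distribution is
# concentrated, so you look less free (more machine-like) to it.
fine = {"chocolate": 0.97, "bread": 0.01, "apple": 0.01, "soup": 0.01}

print(f"coarse predictor's uncertainty: {entropy(coarse):.2f} bits")  # 2.00
print(f"fine predictor's uncertainty:   {entropy(fine):.2f} bits")    # ~0.24

# If you do in fact eat chocolate, the fine predictor is barely surprised:
print(f"{surprisal(coarse['chocolate']):.2f} vs {surprisal(fine['chocolate']):.2f} bits")  # 2.00 vs 0.04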

Basically, an apparent machine is somebody you can predict very well, and an apparent agent is somebody you can predict only in a limited, big-picture way.
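Read as a decision rule, the two definitions might be sketched like this. The thresholds and the three scores are hypothetical stand-ins for quantities the essay leaves unspecified:

```python
def classify(cost, detail_accuracy, outcome_accuracy,
             budget=1.0, threshold=0.9):
    """Label a system from one observer's point of view.

    cost             -- time/energy the observer spends predicting it
    detail_accuracy  -- how well its moment-to-moment behavior is predicted
    outcome_accuracy -- how well its eventual future state is predicted
    budget/threshold -- hypothetical cutoffs; the essay leaves these open
    """
    if cost <= budget and detail_accuracy >= threshold:
        return "apparent machine"   # cheap, detailed prediction works
    if outcome_accuracy >= threshold:
        return "apparent agent"     # only the big picture is predictable
    return "neither"                # not predictable at all

print(classify(cost=0.1, detail_accuracy=0.99, outcome_accuracy=0.99))  # apparent machine
print(classify(cost=5.0, detail_accuracy=0.30, outcome_accuracy=0.95))  # apparent agent
```

Note that the cost is relative to the observer’s computing power, so the same system can come out as a machine to one observer and an agent to another; that relativity is the point of the paragraphs below.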

A successful agent needs to figure out what other agents are going to do. But it’s too hard to model them as apparent machines, simply because of how complicated creatures are. It’s easier to model them as apparent agents.

Apparent agents are apparently free: they aren’t apparently deterministic.

Apparent agents are willful: they take actions.

Thus, apparent agents apparently have free will. To say someone “has free will” means that they are a creature that does things in a way you can’t predict in detail, but can somewhat predict in outcome. Machines can be willful or not, but they are not free.

In this theory, free will becomes a property possessed not by creatures themselves, but by creatures interacting with other creatures.

Eventually, some creatures evolved to turn this line of thought on themselves, probably those animals that are very social and need to think about themselves constantly, like humans.

And that’s how humans come to think that they themselves have free will.

Perhaps all systems complicated enough to think are too complicated to predict themselves; if so, they would all consider themselves to have free will.

From free to unfree

With more predictive power, a creature could model other creatures as apparent machines instead of apparent agents. This is in fact how humans have been treating other animals; Descartes is a famous example. But any creature can be a machine to someone with enough computing power.

Thinking of a creature as a machine to operate rather than an agent to negotiate with is usually regarded as psychopathic. Most psychopathic humans are so not because of an intelligent confidence in predicting other humans, but because of a lack of empathy or impulse control, caused by some environmental/genetic/social/brain abnormality.

But psychopathic modeling of humans can happen in an intelligent, honest way, if someone (say, a great psychologist) becomes so good at modeling humans that other humans are entirely predictable to him.

This has been achieved in a limited way by advertising companies, attention design, and politics. The manipulation of the 2016 American election by Cambridge Analytica is an example of honest psychopathy. It will become more prevalent and more subtle, since overt manipulation makes humans deliberately become less predictable as a defense.

Emotionally intelligent robots/electronic friends could become benevolent psychopaths. They will (hopefully) be benevolent, or at least be designed to be. They will become more and more psychopathic (not in the usual “evil” sense, I emphasize) as they get better at understanding humans. This is one possible reason for humans to limit the power of their electronic friends: an unwillingness to be modeled as machines instead of agents.