The Ultimate Source

This post is part of the Solution to “Free Will”.
Followup to: Timeless Control, Possibility and Could-ness

Faced with a burning orphanage, you ponder your next action for long agonizing moments, uncertain of what you will do. Finally, the thought of a burning child overcomes your fear of fire, and you run into the building and haul out a toddler.

There’s a strain of philosophy which says that this scenario is not sufficient for what they call “free will”. It’s not enough for your thoughts, your agonizing, your fear and your empathy, to finally give rise to a judgment. It’s not enough to be the source of your decisions.

No, you have to be the ultimate source of your decisions. If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you were not that ultimate source.

But we already drew this diagram:

[Diagram from the earlier posts: two causal structures linking Past, Present, and Future.]

As previously discussed, the left-hand structure is preferred, even given deterministic physics, because it is more local; and because it is not possible to compute the Future without computing the Present as an intermediate.

So it is proper to say, “If-counterfactual the past changed and the present remained the same, the future would remain the same,” but not to say, “If the past remained the same and the present changed, the future would remain the same.”

Are you the true source of your decision to run into the burning orphanage? What if your parents once told you that it was right for people to help one another? What if it were the case that, if your parents hadn’t told you so, you wouldn’t have run into the burning orphanage? Doesn’t that mean that your parents made the decision for you to run into the burning orphanage, rather than you?

On several grounds, no:

If it were counterfactually the case that your parents hadn’t raised you to be good, then it would counterfactually be the case that a different person would stand in front of the burning orphanage. It would be a different person who arrived at a different decision. And how can you be anyone other than yourself? Your parents may have helped pluck you out of Platonic person-space to stand in front of the orphanage, but is that the same as controlling the decision of your point in Platonic person-space?

Or: If we imagine that your parents had raised you differently, and yet somehow, exactly the same brain had ended up standing in front of the orphanage, then the same action would have resulted. Your present self and brain screen off the influence of your parents—this is true even if the past fully determines the future.
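The screening-off point can be sketched as a toy deterministic program (a sketch of my own; the factor names and numbers are invented for illustration): the action is computed from the present brain state alone, so two different pasts that converge on the same present yield the same future.

```python
# Toy model of screening off: the Future depends only on the Present,
# so once the Present is fixed, varying the Past changes nothing.

def present_state(upbringing):
    # Two different pasts that happen to converge on the same brain state.
    if upbringing in ("parents taught kindness", "superhero comics"):
        return {"empathy": 0.9, "fear_of_fire": 0.6}
    return {"empathy": 0.2, "fear_of_fire": 0.6}

def action(state):
    # The decision is a function of the present state alone.
    return "run in" if state["empathy"] > state["fear_of_fire"] else "stay out"

# Different pasts, same present -> same action:
assert action(present_state("parents taught kindness")) == \
       action(present_state("superhero comics")) == "run in"

# A different present -> a different action, whatever the past was:
assert action(present_state("raised by wolves")) == "stay out"
```

Holding the present fixed while varying the past leaves the output unchanged; holding the past fixed while varying the present does not.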

But above all: There is no single true cause of an event. Causality proceeds in directed acyclic networks. I see no good way, within the modern understanding of causality, to translate the idea that an event must have a single cause. Every asteroid large enough to reach Earth’s surface could have prevented the assassination of John F. Kennedy, if it had been in the right place to strike Lee Harvey Oswald. There can be any number of prior events, which if they had counterfactually occurred differently, would have changed the present. After spending even a small amount of time working with the directed acyclic graphs of causality, the idea that a decision can only have a single true source, sounds just plain odd.

So there is no contradiction between “My decision caused me to run into the burning orphanage”, “My upbringing caused me to run into the burning orphanage”, “Natural selection built me in such fashion that I ran into the burning orphanage”, and so on. Events have long causal histories, not single true causes.

Knowing the intuitions behind “free will”, we can construct other intuition pumps. The feeling of freedom comes from the combination of not knowing which decision you’ll make, and of having the options labeled as primitively reachable in your planning algorithm. So if we wanted to pump someone’s intuition against the argument “Reading superhero comics as a child, is the true source of your decision to rescue those toddlers”, we reply:

“But even if you visualize Batman running into the burning building, you might not immediately know which choice you’ll make (standard source of feeling free); and you could still take either action if you wanted to (note correctly phrased counterfactual and appeal to primitive reachability). The comic-book authors didn’t visualize this exact scenario or its exact consequences; they didn’t agonize about it (they didn’t run the decision algorithm you’re running). So the comic-book authors did not make this decision for you. Though they may have contributed to it being you who stands before the burning orphanage and chooses, rather than someone else.”

How could anyone possibly believe that they are the ultimate and only source of their actions? Do they think they have no past?

If we, for a moment, forget that we know all this that we know, we can see what a believer in “ultimate free will” might say to the comic-book argument: “Yes, I read comic books as a kid, but the comic books didn’t reach into my brain and force me to run into the orphanage. Other people read comic books and don’t become more heroic. I chose it.”

Let’s say that you’re confronting some complicated moral dilemma that, unlike a burning orphanage, gives you some time to agonize—say, thirty minutes; that ought to be enough time.

You might find, looking over each factor one by one, that none of them seem perfectly decisive—to force a decision entirely on their own.

You might incorrectly conclude that if no one factor is decisive, all of them together can’t be decisive, and that there’s some extra perfectly decisive thing that is your free will.

Looking back on your decision to run into a burning orphanage, you might reason, “But I could have stayed out of that orphanage, if I’d needed to run into the building next door in order to prevent a nuclear war. Clearly, burning orphanages don’t compel me to enter them. Therefore, I must have made an extra choice to allow my empathy with children to govern my actions. My nature does not command me, unless I choose to let it do so.”

Well, yes, your empathy with children could have been overridden by your desire to prevent nuclear war, if (counterfactual) that had been at stake.

This is actually a hand-vs.-fingers confusion; all of the factors in your decision, plus the dynamics governing their combination, are your will. But if you don’t realize this, then it will seem like no individual part of yourself has “control” of you, from which you will incorrectly conclude that there is something beyond their sum that is the ultimate source of control.

But this is like reasoning that if no single neuron in your brain could control your choice in spite of every other neuron, then all your neurons together must not control your choice either.
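A toy weighted-sum decision (my own illustration, with invented factors and weights) makes the fallacy concrete: no single factor clears the threshold by itself, yet the factors together fully determine the choice, with nothing extra left over.

```python
# Toy decision rule: factors pull with weights; the choice is fully
# determined by their combined pull -- not by any single factor,
# and not by any extra ingredient beyond their sum.

FACTORS = {"empathy": 0.5, "duty": 0.4, "heroic_self_image": 0.3}
THRESHOLD = 0.6  # run into the orphanage if the combined pull exceeds this

def decides_to_run(active_factors):
    return sum(FACTORS[f] for f in active_factors) > THRESHOLD

# All the factors together are decisive:
assert decides_to_run(FACTORS)

# No single factor is decisive on its own...
assert not any(decides_to_run([f]) for f in FACTORS)

# ...and removing any one factor still leaves the decision intact,
# so each factor, inspected alone, looks "overridable":
for f in FACTORS:
    assert decides_to_run([g for g in FACTORS if g != f])
```

Each factor passes the "could I override it?" test, yet the sum of the factors, plus the combining rule, is the whole of the decision.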

Whenever you reflect, and focus your whole attention down upon a single part of yourself, it will seem that the part does not make your decision, that it is not you, because the you-that-sees could choose to override it (it is a primitively reachable option). But when all of the parts of yourself that you see, and all the parts that you do not see, are added up together, they are you; they are even that which reflects upon itself.

So now we have the intuitions that:

  • The sensation of the primitive reachability of actions, is incompatible with their physical determinism

  • A decision can only have a single “true” source; what is determined by the past cannot be determined by the present

  • If no single psychological factor you can see is perfectly responsible, then there must be an additional something that is perfectly responsible

  • When you reflect upon any single factor of your decision, you see that you could override it, and this “you” is the extra additional something that is perfectly responsible

The combination of these intuitions has led philosophy into strange veins indeed.

I once saw one such vein described neatly in terms of “Author” control and “Author*” control, though I can’t seem to find or look up the paper.

Consider the control that an Author has over the characters in their books. Say, the sort of control that I have over Brennan.

By an act of will, I can make Brennan decide to step off a cliff. I can also, by an act of will, control Brennan’s inner nature; I can make him more or less heroic, empathic, kindly, wise, angry, or sorrowful. I can even make Brennan stupider, or smarter up to the limits of my own intelligence. I am entirely responsible for Brennan’s past, both the good parts and the bad parts; I decided everything that would happen to him, over the course of his whole life.

So you might think that having Author-like control over ourselves—which we obviously don’t—would at least be sufficient for free will.

But wait! Why did I decide that Brennan would decide to join the Bayesian Conspiracy? Well, it is in character for Brennan to do so, at that stage of his life. But if this had not been true of Brennan, I would have chosen a different character that would join the Bayesian Conspiracy, because I wanted to write about the beisutsukai. Could I have chosen not to want to write about the Bayesian Conspiracy?

To have Author* self-control is not only to have control over your entire existence and past, but to have initially written your entire existence and past, without having been previously influenced by it—the way that I invented Brennan’s life without having previously lived it. To choose yourself into existence this way, would be Author* control. (If I remember the paper correctly.)

Paradoxical? Yes, of course. The point of the paper was that Author* control is what would be required to be the “ultimate source of your own actions”, the way some philosophers seemed to define it.

I don’t see how you could manage Author* self-control even with a time machine.

I could write a story in which Jane went back in time and created herself from raw atoms using her knowledge of Artificial Intelligence, and then Jane oversaw and orchestrated her own entire childhood up to the point she went back in time. Within the story, Jane would have control over her existence and past—but not without having been “previously” influenced by them. And I, as an outside author, would have chosen which Jane went back in time and recreated herself. If I needed Jane to be a bartender, she would be one.

Even in the unlikely event that, in real life, it is possible to create closed timelike curves, and we find that a self-recreating Jane emerges from the time machine without benefit of human intervention, that Jane still would not have Author* control. She would not have written her own life without having been “previously” influenced by it. She might preserve her personality; but would she have originally created it? And you could stand outside time and look at the cycle, and ask, “Why is this cycle here?” The answer to that would presumably lie within the laws of physics, rather than Jane having written the laws of physics to create herself.

And you run into exactly the same trouble, if you try to have yourself be the sole ultimate Author* source of even a single particular decision made by you—which is to say it was decided by your beliefs, inculcated morals, evolved emotions, etc.—which is to say your brain calculated it—which is to say physics determined it. You can’t have Author* control over one single decision, even with a time machine.

So a philosopher would say: Either we don’t have free will, or free will doesn’t require being the sole ultimate Author* source of your own decisions, QED.

I have a somewhat different perspective, and say: Your sensation of freely choosing, clearly does not provide you with trustworthy information to the effect that you are the ‘ultimate and only source’ of your own actions. This being the case, why attempt to interpret the sensation as having such a meaning, and then say that the sensation is false?

Surely, if we want to know which meaning to attach to a confusing sensation, we should ask why the sensation is there, and under what conditions it is present or absent.

Then I could say something like: “This sensation of freedom occurs when I believe that I can carry out, without interference, each of multiple actions, such that I do not yet know which of them I will take, but I am in the process of judging their consequences according to my emotions and morals.”

This is a condition that can fail in the presence of jail cells, or a decision so overwhelmingly forced that I never perceived any uncertainty about it.
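That condition can be written out as a small predicate (a sketch of my own; the function and argument names are invented, not taken from anywhere):

```python
def sensation_of_freedom(reachable_options, interference, already_certain):
    """Present when I can carry out, without interference, each of
    multiple actions, and do not yet know which I will take."""
    return len(reachable_options) > 1 and not interference and not already_certain

# Ordinary deliberation: several live options, outcome still unknown.
assert sensation_of_freedom({"run in", "stay out"},
                            interference=False, already_certain=False)

# Jail cell: interference blocks the options.
assert not sensation_of_freedom({"run in", "stay out"},
                                interference=True, already_certain=False)

# Overwhelmingly forced decision: no perceived uncertainty.
assert not sensation_of_freedom({"run in", "stay out"},
                                interference=False, already_certain=True)
```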

There—now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation’s veracity. I have no problems about saying that I have “free will” appropriately defined; so long as I am out of jail, uncertain of my own future decision, and living in a lawful universe that gave me emotions and morals whose interaction determines my choices.

Certainly I do not “lack free will” if that means I am in jail, or never uncertain of my future decisions, or in a brain-state where my emotions and morals fail to determine my actions in the usual way.

Usually I don’t talk about “free will” at all, of course! That would be asking for trouble—no, begging for trouble—since the other person doesn’t know about my redefinition. The phrase means far too many things to far too many people, and you could make a good case for tossing it out the window.

But I generally prefer to reinterpret my sensations sensibly, as opposed to refuting a confused interpretation and then calling the sensation “false”.