Possibility and Could-ness

This post is part of the Solution to “Free Will”.
Followup to: Dissolving the Question, Causality and Moral Responsibility

Planning out upcoming posts, it seems to me that I do, in fact, need to talk about the word could, as in, “But I could have decided not to rescue that toddler from the burning orphanage.”

Otherwise, I will set out to talk about Friendly AI, one of these days, and someone will say: “But it’s a machine; it can’t make choices, because it couldn’t have done anything other than what it did.”

So let’s talk about this word, “could”. Can you play Rationalist’s Taboo against it? Can you talk about “could” without using synonyms like “can” and “possible”?

Let’s talk about this notion of “possibility”. I can tell, to some degree, whether a world is actual or not actual; what does it mean for a world to be “possible”?

I know what it means for there to be “three” apples on a table. I can verify that experimentally; I know what state of the world corresponds to it. What does it mean to say that there “could” have been four apples, or “could not” have been four apples? Can you tell me what state of the world corresponds to that, and how to verify it? Can you do it without saying “could” or “possible”?

I know what it means for you to rescue a toddler from the orphanage. What does it mean for you to could-have-not done it? Can you describe the corresponding state of the world without “could”, “possible”, “choose”, “free”, “will”, “decide”, “can”, “able”, or “alternative”?

One last chance to take a stab at it, if you want to work out the answer for yourself...

Some of the first Artificial Intelligence systems ever built were trivially simple planners. You specify the initial state, and the goal state, and a set of actions that map states onto states; then you search for a series of actions that takes the initial state to the goal state.

Modern AI planners are a hell of a lot more sophisticated than this, but it’s amazing how far you can get by understanding the simple math of everything. There are a number of simple, obvious strategies you can use on a problem like this. All of the simple strategies will fail on difficult problems; but you can take a course in AI if you want to talk about that part.

There’s backward chaining: Searching back from the goal, to find a tree of states such that you know how to reach the goal from them. If you happen upon the initial state, you’re done.

There’s forward chaining: Searching forward from the start, to grow a tree of states such that you know how to reach them from the initial state. If you happen upon the goal state, you’re done.

Or if you want a slightly less simple algorithm, you can start from both ends and meet in the middle.

Let’s talk about the forward chaining algorithm for a moment.

Here, the strategy is to keep an ever-growing collection of states that you know how to reach from the START state, via some sequence of actions and (chains of) consequences. Call this collection the “reachable from START” states; or equivalently, label all the states in the collection “reachable from START”. If this collection ever swallows the GOAL state—if the GOAL state is ever labeled “reachable from START”—you have a plan.

“Reachability” is a transitive property. If B is reachable from A, and C is reachable from B, then C is reachable from A. If you know how to drive from San Jose to San Francisco, and from San Francisco to Berkeley, then you know a way to drive from San Jose to Berkeley. (It may not be the shortest way, but you know a way.)
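To make that concrete, here is a minimal sketch of forward chaining in Python, using the driving example above as a toy state space. The function name, the ROADS table, and the action labels are all inventions of this illustration, not a description of any real planner.

    from collections import deque

    def forward_chain(start, goal, actions):
        # Grow the collection of states labeled "reachable from START",
        # remembering one known action sequence for each state.  If the
        # GOAL state is ever swallowed by the collection, return the plan.
        reachable = {start: []}          # state -> actions known to reach it
        frontier = deque([start])
        while frontier:
            state = frontier.popleft()
            if state == goal:
                return reachable[state]  # GOAL labeled "reachable": we have a plan
            for action, successor in actions(state).items():
                if successor not in reachable:
                    # Transitivity: successor is reachable via this state,
                    # and this state is reachable from START.
                    reachable[successor] = reachable[state] + [action]
                    frontier.append(successor)
        return None                      # GOAL never became reachable

    # Toy state space for the driving example.
    ROADS = {
        "San Jose":      {"drive to San Francisco": "San Francisco"},
        "San Francisco": {"drive to Berkeley": "Berkeley"},
        "Berkeley":      {},
    }

    print(forward_chain("San Jose", "Berkeley", lambda city: ROADS[city]))
    # ['drive to San Francisco', 'drive to Berkeley']

Backward chaining is the same picture run in reverse: grow a collection of states from which you know how to reach the GOAL, and stop if it ever swallows the START state.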

If you’ve ever looked over a game-problem and started collecting states you knew how to achieve—looked over a maze, and started collecting points you knew how to reach from START—then you know what “reachability” feels like. It feels like, “I can get there.” You might or might not be able to get to the GOAL from San Francisco—but at least you know you can get to San Francisco.

You don’t actually run out and drive to San Francisco. You’ll wait, and see if you can figure out how to get from San Francisco to GOAL. But at least you could go to San Francisco any time you wanted to.

(Why would you want to go to San Francisco? If you figured out how to get from San Francisco to GOAL, of course!)

Human beings cannot search through millions of possibilities one after the other, like an AI algorithm. But—at least for now—we are often much more clever about which possibilities we do search.

One of the things we do that current planning algorithms don’t do (well) is rule out large classes of states using abstract reasoning. For example, let’s say that your goal (or current subgoal) calls for you to cover at least one of these boards using domino 2-tiles.

[Image: the three boards to be covered]

The black square is a missing cell; this leaves 24 cells to be covered with 12 dominos.

You might just dive into the problem, and start trying to cover the first board using dominos—discovering new classes of reachable states:

[Image: a partial domino covering of the first board]

However, you will find after a while that you can’t seem to reach a goal state. Should you move on to the second board, and explore the space of what’s reachable there?

But I wouldn’t bother with the second board either, if I were you. If you construct this coloring of the boards:

[Image: the boards under a grey-and-yellow coloring of their cells]

Then you can see that every domino has to cover one grey and one yellow square. And only the third board has equal numbers of grey and yellow squares. So no matter how clever you are with the first and second board, it can’t be done.

With one fell swoop of creative abstract reasoning—we constructed the coloring, it was not given to us—we’ve cut down our search space by a factor of three. We’ve reasoned out that the reachable states involving dominos placed on the first and second board will never include a goal state.
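If you want to check that kind of argument mechanically, here is a minimal sketch. Since the board pictures aren’t reproduced here, the code uses a stand-in board (the classic 8-by-8 grid with two opposite corners removed) and a plain checkerboard coloring rather than the coloring constructed above; the principle is the same, because a domino always covers one cell of each color, so unequal color counts rule out every placement at once.

    def color_counts(cells):
        # Checkerboard-color a set of (row, column) cells and count each color.
        grey = sum(1 for row, col in cells if (row + col) % 2 == 0)
        yellow = len(cells) - grey
        return grey, yellow

    # Stand-in board: an 8x8 grid with two opposite corners removed.
    # (Both removed corners happen to be the same color.)
    board = {(row, col) for row in range(8) for col in range(8)} - {(0, 0), (7, 7)}

    grey, yellow = color_counts(board)
    print(grey, yellow)   # 30 32: unequal, so no domino covering exists,
                          # no matter how cleverly the dominoes are placed.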

Naturally, one characteristic that rules out whole classes of states in the search space is if you can prove that the state itself is physically impossible. If you’re looking for a way to power your car without all that expensive gasoline, it might seem like a brilliant idea to have a collection of gears that would turn each other while also turning the car’s wheels—a perpetual motion machine of the first type. But because it is a theorem that this is impossible in classical mechanics, we know that every clever thing we can do with classical gears will not suffice to build a perpetual motion machine. It is as impossible as covering the first board with classical dominos. So it would make more sense to concentrate on new battery technologies instead.

Surely, what is physically impossible cannot be “reachable”… right? I mean, you would think...

Oh, yeah… about that free will thing.

So your brain has a planning algorithm—not a deliberate algorithm that you learned in school, but an instinctive planning algorithm. For all the obvious reasons, this algorithm keeps track of which states have known paths from the start point. I’ve termed this label “reachable”, but the way the algorithm feels from inside is that it just feels like you can do it. Like you could go there any time you wanted.

And what about actions? They’re primitively labeled as reachable; all other reachability is transitive from actions by consequences. You can throw a rock, and if you throw a rock it will break a window, therefore you can break a window. If you couldn’t throw the rock, you wouldn’t be able to break the window.

Don’t try to understand this in terms of how it feels to “be able to” throw a rock. Think of it in terms of a simple AI planning algorithm. Of course the algorithm has to treat the primitive actions as primitively reachable. Otherwise it will have no planning space in which to search for paths through time.

And similarly, there’s an internal algorithmic label for states that have been ruled out:

worldState.possible == 0
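Putting the two labels side by side, a minimal sketch might look like this. The WorldState class, its field names, and the rock-and-window example are all made up for illustration; the point is only that the results of primitive actions get the reachable label directly, the label spreads transitively through consequences, and a state flagged as not possible never receives it.

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class WorldState:
        name: str
        possible: bool = True       # False when ruled out, as in possible == 0 above
        reachable: bool = False     # the "I can get there" label
        consequences: list = field(default_factory=list)   # states this one leads to

    def label_reachable(primitive_results):
        # Primitive actions are treated as primitively reachable, so the states
        # they immediately produce get the label; everything else inherits it
        # transitively through chains of consequences.
        frontier = deque()
        for state in primitive_results:
            if state.possible:
                state.reachable = True
                frontier.append(state)
        while frontier:
            state = frontier.popleft()
            for consequence in state.consequences:
                if consequence.possible and not consequence.reachable:
                    consequence.reachable = True
                    frontier.append(consequence)

    # You can throw a rock, and throwing the rock breaks a window.
    rock_thrown = WorldState("rock thrown")
    window_broken = WorldState("window broken")
    rock_thrown.consequences.append(window_broken)

    label_reachable([rock_thrown])
    print(window_broken.reachable)   # True: since you can throw the rock,
                                     # you can break the window.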

So when people hear that the world is deterministic, they translate that into: “All actions except one are impossible.” This seems to contradict their feeling of being free to choose any action. The notion of physics following a single line seems to contradict their perception of a space of possible plans to search through.

The representations in our cognitive algorithms do not feel like representations; they feel like the way the world is. If your mind constructs a search space of states that would result from the initial state given various actions, it will feel like the search space is out there, like there are certain possibilities.

We’ve previously discussed how probability is in the mind. If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin. The coin itself is either heads or tails. But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world.

So I doubt it will come as any surprise to my longer-abiding readers, if I say that possibility is also in the mind.

What concrete state of the world—which quarks in which positions—corresponds to “There are three apples on the table, and there could be four apples on the table”? Having trouble answering that? Next, say how that world-state is different from “There are three apples on the table, and there couldn’t be four apples on the table.” And then it’s even more trouble, if you try to describe could-ness in a world in which there are no agents, just apples and tables. This is a Clue that could-ness and possibility are in your map, not directly in the territory.

What is could-ness, in a state of the world? What are can-ness and able-ness? They are what it feels like to have found a chain of actions which, if you output them, would lead from your current state to the could-state.

But do not say, “I could achieve X”. Say rather, “I could reach state X by taking action Y, if I wanted”. The key phrase is “if I wanted”. I could eat that banana, if I wanted. I could step off that cliff there—if, for some reason, I wanted to.

Where does the wanting come from? Don’t think in terms of what it feels like to want, or decide something; try thinking in terms of algorithms. For a search algorithm to output some particular action—choose—it must first carry out a process where it assumes many possible actions as having been taken, and extrapolates the consequences of those actions.

Perhaps this algorithm is “deterministic”, if you stand outside Time to say it. But you can’t write a decision algorithm that works by just directly outputting the only action it can possibly output. You can’t save on computing power that way. The algorithm has to assume many different possible actions as having been taken, and extrapolate their consequences, and then choose an action whose consequences match the goal. (Or choose the action whose probabilistic consequences rank highest in the utility function, etc. And not all planning processes work by forward chaining, etc.)

You might imagine the decision algorithm as saying: “Suppose the output of this algorithm were action A, then state X would follow. Suppose the output of this algorithm were action B, then state Y would follow.” This is the proper cashing-out of could, as in, “I could do either X or Y.” Having computed this, the algorithm can only then conclude: “Y ranks above X in the Preference Ordering. The output of this algorithm is therefore B. Return B.”

The algorithm, therefore, cannot produce an output without extrapolating the consequences of itself producing many different outputs. All but one of the outputs being considered are counterfactual; but which output is the factual one cannot be known to the algorithm until it has finished running.
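Here is that tangle as a minimal sketch. The candidate actions, the outcome table, and the preference numbers are invented for this example, and real planning processes are of course nothing like this crude; it only shows the shape of the computation: extrapolate each output as if it had been taken, then return the one whose consequence ranks highest.

    def decide(candidate_actions, extrapolate, preference):
        # Assume each output as having been produced, extrapolate the
        # consequence of producing it, and only then settle on the output
        # whose consequence ranks highest in the preference ordering.
        extrapolations = {}
        for action in candidate_actions:
            # "Suppose the output of this algorithm were this action..."
            extrapolations[action] = extrapolate(action)
        # All but one of these extrapolations will turn out to be counterfactual.
        return max(candidate_actions, key=lambda a: preference(extrapolations[a]))

    # Invented toy outcomes and preferences.
    outcomes = {"eat the banana": "fed", "jump off the cliff": "splat"}
    ranking = {"fed": 1, "splat": -1000}

    choice = decide(["eat the banana", "jump off the cliff"],
                    extrapolate=lambda action: outcomes[action],
                    preference=lambda outcome: ranking[outcome])
    print(choice)   # 'eat the banana': deterministic, yet computed only by
                    # considering both outputs as if they had been taken.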

A bit tangled, eh? No wonder humans get confused about “free will”.

You could eat the banana, if you wanted. And you could jump off a cliff, if you wanted. These statements are both true, though you are rather more likely to want one than the other.

You could even flatly say, “I could jump off a cliff” and regard this as true—if you construe could-ness according to reachability, and count actions as primitively reachable. But this does not challenge deterministic physics; you will either end up wanting to jump, or not wanting to jump.

The statement, “I could jump off the cliff, if I chose to” is entirely compatible with “It is physically impossible that I will jump off that cliff”. It need only be physically impossible for you to choose to jump off a cliff—not physically impossible for any simple reason, perhaps, just a complex fact about what your brain will and will not choose.

Defining things appropriately, you can even endorse both of the statements:

  • “I could jump off the cliff” is true from my point-of-view

  • “It is physically impossible for me to jump off the cliff” is true for all observers, including myself

How can this happen? If all of an agent’s actions are primitive-reachable from that agent’s point-of-view, but the agent’s decision algorithm is so constituted as to never choose to jump off a cliff.

You could even say that “could” for an action is always defined relative to the agent who takes that action, in which case I can simultaneously make the following two statements:

  • NonSuicidalGuy could jump off the cliff.

  • It is impossible that NonSuicidalGuy will hit the ground.

If that sounds odd, well, no wonder people get confused about free will!

But you would have to be very careful to use a definition like that one consistently. “Could” has another closely related meaning in which it refers to the provision of at least a small amount of probability. This feels similar, because when you’re evaluating actions that you haven’t yet ruled out taking, then you will assign at least a small probability to actually taking those actions—otherwise you wouldn’t be investigating them. Yet “I could have a heart attack at any time” and “I could have a heart attack any time I wanted to” are not the same usage of could, though they are confusingly similar.

You can only decide by going through an intermediate state where you do not yet know what you will decide. But the map is not the territory. It is not required that the laws of physics be random about that which you do not know. Indeed, if you were to decide randomly, then you could scarcely be said to be in “control”. To determine your decision, you need to be in a lawful world.

It is not required that the lawfulness of reality be disrupted at that point, where there are several things you could do if you wanted to do them; but you do not yet know their consequences, or you have not finished evaluating the consequences; and so you do not yet know which thing you will choose to do.

A blank map does not correspond to a blank territory. Not even an agonizingly uncertain map corresponds to an agonizingly uncertain territory.

(Next in the free will solution sequence is “The Ultimate Source”, dealing with the intuition that we have some chooser-faculty beyond any particular desire or reason. As always, the interested reader is advised to first consider this question on their own—why would it feel like we are more than the sum of our impulses?)