# Decisions are not about changing the world, they are about learning what world you live in

Epistemic status: Probably discussed to death in multiple places, but people still make this mistake all the time. I am not well versed in UDT, but it seems along the same lines. Or maybe I am reinventing some aspects of Game Theory.

We know that physics does not support the idea of metaphysical free will. By metaphysical free will I mean the magical ability of agents to change the world just by making a decision to do so. To the best of our knowledge, we are all (probabilistic) automatons who think of themselves as agents with free choices. A model compatible with the known laws of physics is that what we think of as modeling, predicting and making choices is actually learning which one of the possible worlds we live in. Think of it as being a passenger in a car and seeing new landscapes all the time. The main difference is that the car is invisible to us and we constantly update the map of the expected landscape based on what we see. We have a sophisticated updating and predicting algorithm inside, and it often produces accurate guesses. We experience those as choices made. As if we were the ones in the driver’s seat, not just the passengers.

Realizing that decisions are nothing but updates, that making a decision is a subjective experience of discovering which of the possible worlds is the actual one, immediately adds clarity to a number of decision theory problems. For example, if you accept that you have no way to change the world, only to learn which of the possible worlds you live in, then Newcomb’s problem with a perfect predictor becomes trivial: there is no possible world where a two-boxer wins. There are only two possible worlds, one where you are a one-boxer who wins, and one where you are a two-boxer who loses. Making a decision to either one-box or two-box is a subjective experience of learning what kind of person you are, i.e. what world you live in.
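The world-counting view is easy to make concrete. Here is a minimal sketch in Python (the dictionary layout and names are mine, not from the problem statement):

```python
# Possible worlds for Newcomb's problem with a perfect predictor.
# The agent's type fixes the world: there is no world in which a
# two-boxer finds box B full.
worlds = {
    "one-boxer": 1_000_000,  # predictor filled box B; agent takes only B
    "two-boxer": 1_000,      # box B is empty; agent gets box A's $1,000
}

best_type = max(worlds, key=worlds.get)
print(best_type)  # one-boxer
```

Learning that you are the type that one-boxes is learning that you live in the higher-utility world.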

This description, while fitting the observations perfectly, is extremely uncomfortable emotionally. After all, what’s the point of making decisions if you are just a passenger spinning a fake steering wheel not attached to any actual wheels? The answer is the usual compatibilist one: we are compelled to behave as if we were making decisions by our built-in algorithm. The classic quote from Ambrose Bierce applies:

“There’s no free will,” says the philosopher; “To hang is most unjust.”
“There is no free will,” assents the officer; “We hang because we must.”

So, while uncomfortable emotionally, this model lets us make better decisions (the irony is not lost on me, but since “making a decision” is nothing but an emotionally comfortable version of “learning what possible world is actual”, there is no contradiction).

An aside on quantum mechanics. It follows from the unitary evolution of the quantum state, coupled with the Born rule for observation, that the world is only predictable probabilistically at the quantum level, which, in our model of learning about the world we live in, puts limits on how accurate the world model can be. Otherwise the quantum nature of the universe (or multiverse) has no bearing on the perception of free will.

Let’s go through some examples, several of which are listed as numbered dilemmas in a recent paper by Eliezer Yudkowsky and Nate Soares, “Functional decision theory: A new theory of instrumental rationality.” From here on out we will refer to this paper as EYNS.

## Psychological Twin Prisoner’s Dilemma

An agent and her twin must both choose to either “cooperate” or “defect.” If both cooperate, they each receive $1,000,000. If both defect, they each receive $1,000. If one cooperates and the other defects, the defector gets $1,001,000 and the cooperator gets nothing. The agent and the twin know that they reason the same way, using the same considerations to come to their conclusions. However, their decisions are causally independent, made in separate rooms without communication. Should the agent cooperate with her twin?

First we enumerate all the possible worlds, which in this case are just two, once we ignore the meaningless verbal fluff like “their decisions are causally independent, made in separate rooms without communication.” This sentence adds zero information, because the “agent and the twin know that they reason the same way”, so there is no way for them to make different decisions. These worlds are:

1. Cooperate world: $1,000,000

2. Defect world: $1,000
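Since identical reasoners cannot diverge, the mixed outcomes never appear as possible worlds, and the enumeration is two lines long (a sketch, with my own naming):

```python
# Psychological Twin Prisoner's Dilemma: the twins reason identically,
# so the "one cooperates, one defects" outcomes are not possible worlds.
worlds = {
    "both cooperate": 1_000_000,
    "both defect": 1_000,
}

best_world = max(worlds, key=worlds.get)
print(best_world)  # both cooperate
```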

## Parfit’s Hitchhiker

The driver will have no way to enforce the payment after they arrive, but she does have an extraordinary ability to detect lies with 99% accuracy. Being left to die causes the agent to lose the equivalent of $1,000,000. In the case where the agent gets to the city, should she proceed to visit the ATM and pay the driver?

We note a missing piece in the problem statement: what are the odds of the agent lying about not paying and the driver detecting the lie and giving a ride anyway? It can be, for example, 0% (the driver does not bother to use her lie detector in this case) or the same 99% accuracy as in the case where the agent lies about paying. We assume the first case for this problem, as it makes more sense intuitively.

As usual, we draw possible worlds, partitioned by the “decision” made by the hitchhiker, and note the utility of each possible world. We do not know which world would be the actual one for the hitchhiker until we observe it (“we” in this case might denote the agent themselves, even though they feel like they are making a decision). So, while the highest-utility world is the one where the agent does not pay and the driver believes they would, the odds of this possible world being actual are very low, and the agent who will end up paying after the trip has higher expected utility before the trip.

This is pretty confusing, because the intuitive CDT approach would be to promise to pay, yet refuse after. This is effectively thwarted by the driver’s lie detector. Note that if the lie detector were perfect, then there would be just two possible worlds: 1. pay and survive, 2. do not pay and die. Once the possible worlds are written down, it becomes clear that the problem is essentially isomorphic to Newcomb’s.
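A sketch of the expected-utility bookkeeping, under two assumptions of mine that are not in the excerpt above: the fare is $1,000, and being left to die is modeled as a flat loss of $1,000,000:

```python
# Parfit's Hitchhiker with a 99%-accurate lie detector.
ACCURACY = 0.99     # detector's accuracy on a truthful promise
FARE = 1_000        # assumed fare paid at the ATM (not stated in the text)
DEATH = 1_000_000   # loss equivalent of being left in the desert

# Intend to pay: with prob 0.99 the driver believes you -> ride, pay fare;
# with prob 0.01 the detector misfires -> left to die.
eu_intend_pay = ACCURACY * (-FARE) + (1 - ACCURACY) * (-DEATH)

# Intend not to pay: per the text, the driver never gives the ride.
eu_intend_no_pay = -DEATH

print(eu_intend_pay > eu_intend_no_pay)  # True
```

The intend-to-pay worlds carry an expected loss of about $10,990, dwarfed by the certain $1,000,000 loss in the intend-not-to-pay world.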
Another problem that is isomorphic to it is the Transparent Newcomb Problem.

## The Transparent Newcomb Problem

Events transpire as they do in Newcomb’s problem, except that this time both boxes are transparent, so the agent can see exactly what decision the predictor made before making her own decision. The predictor placed $1,000,000 in box B iff she predicted that the agent would leave behind box A (which contains $1,000) upon seeing that both boxes are full. In the case where the agent faces two full boxes, should she leave the $1,000 behind?

Once you are used to enumerating possible worlds, whether the boxes are transparent or not does not matter. The decision whether to take one box or two is already made before the boxes are presented, transparent or not. The analysis of the conceivable worlds is identical to the original Newcomb’s problem. To clarify: if you are in the world where you see two full boxes, wouldn’t it make sense to two-box? Well, yes, it would, but if this is what you “decide” to do (and all decisions are made in advance, as far as the predictor is concerned, even if the agent is not aware of this), you will never (or very rarely, if the predictor is almost, but not fully, infallible) find yourself in this world. Conversely, if you one-box even when you see two full boxes, that situation always, or almost always, happens.

If you think you pre-committed to one-boxing but then are capable of two-boxing, congratulations! You are in the rare world where you have successfully fooled the predictor!

From this analysis it becomes clear that the word “transparent” is yet another superfluous stipulation, as it contains no new information. Two-boxers will two-box, one-boxers will one-box, transparency or not.
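The “almost, but not fully, infallible” case can be made quantitative. This sketch (the concrete accuracy value is my own example) weighs the worlds by the predictor’s accuracy a:

```python
# Newcomb-style payoffs with a predictor of accuracy a.
def eu_one_box(a):
    # with prob a the predictor saw this coming and filled box B
    return a * 1_000_000

def eu_two_box(a):
    # with prob a box B is empty; with prob 1 - a the predictor was fooled
    return a * 1_000 + (1 - a) * 1_001_000

a = 0.99
print(eu_one_box(a) > eu_two_box(a))  # True
```

Setting the two expressions equal shows that one-boxing wins for any accuracy above a = 1,001,000 / 2,000,000 ≈ 0.5005, i.e. for any predictor even slightly better than chance.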

At this point it is worth pointing out the difference between world counting and EDT, CDT and FDT. The latter three tend to get mired in reasoning about their own reasoning, instead of reasoning about the problem they are trying to decide. In contrast, we mindlessly evaluate probability-weighted utilities, unconcerned with the pitfalls of causality, retro-causality, counterfactuals, counter-possibilities, subjunctive dependence and other hypothetical epicycles. There are only recursion-free possible worlds of different probabilities and utilities, and a single actual world observed after everything is said and done. While reasoning about reasoning is clearly extremely important in the field of AI research, the dilemmas presented in EYNS do not require anything as involved. Simple counting does the trick better.

The next problem is rather confusing in its original presentation.

## The Cosmic Ray Problem

An agent must choose whether to take $1 or $100. With vanishingly small probability, a cosmic ray will cause her to do the opposite of what she would have done otherwise. If she learns that she has been affected by a cosmic ray in this way, she will need to go to the hospital and pay $1,000 for a check-up. Should she take the $1, or the $100?

A bit of clarification is in order before we proceed. What does “do the opposite of what she would have done otherwise” mean, operationally? Here let us interpret it in the following way: deciding and attempting to do X, but ending up doing the opposite of X and realizing it after the fact. Something like “OK, let me take $100… Oops, how come I took $1 instead? I must have been struck by a cosmic ray, gotta do the $1,000 check-up!”

Another point is that here again there are two probabilities in play: the odds of taking $1 while intending to take $100, and the odds of taking $100 while intending to take $1. We assume these are the same, and denote the (small) probability of a cosmic ray strike as p.

The analysis of the dilemma is boringly similar to the previous ones:
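The bookkeeping, with both flip probabilities equal to p as assumed above (the concrete value of p is my own example):

```python
# Cosmic Ray Problem: intend one amount, but with probability p a ray
# flips the action and adds a $1,000 hospital check-up.
def eu(intended, flipped, p):
    return (1 - p) * intended + p * (flipped - 1_000)

p = 1e-6  # "vanishingly small" flip probability
print(eu(100, 1, p) > eu(1, 100, p))  # True: take the $100
```

For any small p, intending to take the $100 remains the higher-utility world; the rare check-up world barely dents the expectation.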

## XOR Blackmail

“I know whether or not you have termites, and I have sent you this letter iff exactly one of the following is true: (i) the rumor is false, and you are going to pay me $1,000 upon receiving this letter; or (ii) the rumor is true, and you will not pay me upon receiving this letter.”

The predictor then predicts what the agent would do upon receiving the letter, and sends the agent the letter iff exactly one of (i) or (ii) is true. Thus, the claim made by the letter is true. Assume the agent receives the letter. Should she pay up?

The problem is called “blackmail” because those susceptible to paying the ransom receive the letter when their house doesn’t have termites, while those who are not susceptible do not. The predictor has no influence on the infestation, only on who receives the letter. So, by pre-committing to not paying, one avoids the blackmail, and if they receive the letter, it is basically an advance notification of the infestation, nothing more.

EYNS states “the rational move is to refuse to pay” assuming the agent receives the letter. This tentatively assumes that the agent has a choice in the matter once the letter is received. This turns the problem on its head and gives the agent a counterintuitive option of having to decide whether to pay after the letter has been received, as opposed to analyzing the problem in advance (and precommitting to not paying, thus preventing the letter from being sent, if you are the sort of person who believes in choice).

The possible worlds analysis of the problem is as follows. Let’s assume that the probability of having termites is p, the greedy predictor is perfect, and the letter is sent to everyone “eligible”, i.e. to everyone with an infestation who would not pay, and to everyone without the infestation who would pay upon receiving the letter.
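Skipping ahead to the arithmetic, the possible worlds tally up as follows (a sketch; it also assumes no one pays without receiving a letter):

```python
# XOR blackmail with a perfect predictor: p is the probability of
# termites ($1,000,000 in damage); the ransom is $1,000.
def eu_would_pay(p):
    # termites -> no letter, eat the damage; no termites -> letter, pay up
    return p * (-1_000_000) + (1 - p) * (-1_000)

def eu_would_not_pay(p):
    # termites -> the letter is a mere notification; no termites -> nothing
    return p * (-1_000_000)

p = 0.5  # exact in binary floating point, so the difference is exact too
print(eu_would_not_pay(p) - eu_would_pay(p))  # 500.0, i.e. 1,000 * (1 - p)
```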
We further assume that there are no paranoid agents, those who would pay “just in case” even when not receiving the letter. In general, this case would have to be considered as a separate world. Now the analysis is quite routine: not paying is, not surprisingly, always better than paying, by the “blackmail amount” 1,000(1-p).

One thing to note is that the case where the would-pay agent has termites but does not receive a letter is easy to overlook, since it does not include receiving a letter from the predictor. However, this is a possible world contributing to the overall utility, even if it is not explicitly stated in the problem.

Other dilemmas that yield to a straightforward analysis by world enumeration are Death in Damascus, regular and with a random coin, the Mechanical Blackmail and the Psychopath Button.

One final point that I would like to address is that treating the apparent decision making as a self- and world-discovery process, not as an attempt to change the world, helps one analyze adversarial setups that stump the decision theories that assume free will.

## Immunity from Adversarial Predictors

EYNS states in Section 9: “There is no perfect decision theory for all possible scenarios, but there may be a general-purpose decision theory that matches or outperforms all rivals in fair dilemmas, if a satisfactory notion of “fairness” can be formalized.” and later “There are some immediate technical obstacles to precisely articulating this notion of fairness. Imagine I have a copy of Fiona, and I punish anyone who takes the same action as the copy. Fiona will always lose at this game, whereas Carl and Eve might win. Intuitively, this problem is unfair to Fiona, and we should compare her performance to Carl’s not on the “act differently from Fiona” game, but on the analogous “act differently from Carl” game.
It remains unclear how to transform a problem that’s unfair to one decision theory into an analogous one that is unfair to a different one (if an analog exists) in a reasonably principled and general way.”

I note here that simply enumerating possible worlds evades this problem, as far as I can tell. Let’s consider a simple “unfair” problem: if the agent is predicted to use a certain decision theory DT1, she gets nothing, and if she is predicted to use some other approach (DT2), she gets $100. There are two possible worlds here, one where the agent uses DT1, and the other where she uses DT2:

So a principled agent who always uses DT1 is penalized. Suppose another time the agent might face the opposite situation, where she is punished for following DT2 instead of DT1. What is the poor agent to do, being stuck between Scylla and Charybdis? There are 4 possible worlds in this case:

1. Agent uses DT1 always

2. Agent uses DT2 always

3. Agent uses DT1 when rewarded for using DT1 and DT2 when rewarded for using DT2

4. Agent uses DT1 when punished for using DT1 and DT2 when punished for using DT2

World number 3 is where the agent wins, regardless of how adversarial or “unfair” the predictor is trying to be to her. Enumerating possible worlds lets us crystallize the type of agent that would always get the maximum possible payoff, no matter what. Such an agent would subjectively feel that they are excellent at making decisions, whereas they simply live in the world where they happen to win.
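The four worlds can be checked mechanically. In this sketch (the game and policy names are mine) each game pays $100 for being predicted to use the theory it rewards:

```python
# Two adversarial games: each rewards one decision theory and pays
# nothing to the other. games maps each game to the rewarded theory.
games = {"punish DT1": "DT2", "punish DT2": "DT1"}

policies = {
    "always DT1": lambda game: "DT1",
    "always DT2": lambda game: "DT2",
    "whichever is rewarded": lambda game: games[game],
    "whichever is punished": lambda game: "DT2" if games[game] == "DT1" else "DT1",
}

# Total payoff of each policy across both games.
payoffs = {
    name: sum(100 if policy(g) == games[g] else 0 for g in games)
    for name, policy in policies.items()
}
print(payoffs)
```

World number 3 corresponds to the “whichever is rewarded” row: such an agent collects $200, the maximum possible, no matter which game the adversarial predictor sets up, while the always-DT1 and always-DT2 agents each collect $100.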

• We know that physics does not support the idea of metaphysical free will. By metaphysical free will I mean the magical ability of agents to change the world by just making a decision to do so.

According to my understanding of the ordinary, everyday, non-magical meanings of the words “decide”, “act”, “change”, etc., we do these things all the time. So do autonomous vehicles, for that matter. So do cats and dogs. Intention, choice, and steering the world into desired configurations are what we do, as do some of our machines.

It is strange that people are so ready to deny these things to people, when they never make the same arguments about machines. Instead, for example, they want to know what a driverless car saw and decided when it crashed, or protest that engine control software detected when it was under test and tuned the engine to misleadingly pass the emissions criteria. And of course there is a whole mathematical field called “decision theory”. It’s about decisions.

After all, what’s the point of making decisions if you are just a passenger spinning a fake steering wheel not attached to any actual wheels?

The simile contradicts your argument, which implies that there is no such thing as a steering wheel. But there is. Real steering wheels, that the real driver of a real car uses to really steer it. Are the designers and manufacturers of steering wheels wasting their efforts?

The answer is the usual compatibilist one: we are compelled to behave as if we were making decisions by our built-in algorithm.

Now that’s magic — to suppose that our beliefs are absolutely groundless, yet some compelling force maintains them in alignment with reality.

• According to my understanding of the ordinary, everyday, non-magical meanings of the words “decide”, “act”, “change”, etc., we do these things all the time.

We perceive the world as if we were intentionally doing them, yes. But there is no “top-down causation” in physics that supports this view. And our perspective on agency depends on how much we know about the “agent”: the more we know, the less agenty the entity feels. It’s a known phenomenon. I mentioned it before a couple of times, including here and on my blog.

• “The sage is one with causation.”

The same argument that “we” do not “do” things, also shows that there is no such thing as a jumbo jet, no such thing as a car, not even any such thing as an atom; that nothing made of parts exists. We thought protons were elementary particles, until we discovered quarks. But no: according to this view “we” did not “think” anything, because “we” do not exist and we do not “think”. Nobody and nothing exists.

All that such an argument does is redefine the words “thing” and “exist” in ways that no-one has ever used them and no-one ever consistently could. It fails to account for the fact that the concepts work.

You say that agency is bugs and uncertainty, that its perception is an illusion stemming from ignorance; I say that agency is control systems, a real thing that can be experimentally detected in both living organisms and some machines, and detected to be absent in other things.

• The same argument that “we” do not “do” things, also shows that there is no such thing as a jumbo jet, no such thing as a car, not even any such thing as an atom; that nothing made of parts exists.

and

It fails to account for the fact that the concepts work.

Actually, using the concepts that work is the whole point of my posts on LW, as opposed to using the concepts that feel right. I dislike the terms like “exist” as pointing to some objective reality, and this is where I part ways with Eliezer. To me it is “models all the way down.” Here is another post on this topic from a few years back: Mathematics as a lossy compression algorithm gone wild. Once you consciously replace “true” with “useful” and “exist” with “usefully modeled as,” a lot of confusion over what exists and what does not, what is true and what is false, what is knowledge and what is belief, what is objective and what is subjective, simply melts away. In this vein, it is very much useful to model a car as a car, not as a transient spike in quantum fields. In the same vein, it is useful to model the electron scattering through double slits as a transient spike in quantum fields, and not as a tiny ping-pong ball that can sometimes turn into a wave.

I say that agency is control systems, a real thing that can be experimentally detected in both living organisms and some machines, and detected to be absent in other things.

I agree that a lot of agent-looking behavior can be usefully modeled as a multi-level control system, and, if anything, this is not done enough in biology, neuroscience or applied philosophy, if the latter is even a thing. By the same token, the control system approach is a useful abstraction for many observed phenomena, living or otherwise, not just agents. It does not lay claim to what an agent is, just what approach can be used to describe some agenty behaviors. I see absolutely no contradiction with what I said here or elsewhere.

Maybe one way to summarize my point in this post is that modeling the decisions as learning about oneself and the world is more useful for making good decisions than modeling an agent as changing the world with her decisions.

• Actually, using the concepts that work is the whole point of my posts on LW, as opposed to using the concepts that feel right.

It seems to me that the concepts “jumbo jet”, “car”, and “atom” all work. If they “feel right”, it is because they work. “Feeling right” is not some free-floating attribute to be bestowed at will on this or that.

A telling phrase in the post you linked is “for some reason”:

In yet other words, a good approximation is, for some reason, sometimes also a good extrapolation.

Unless you can expand on that “some reason”, this is just pushing under the carpet the fact that certain things work spectacularly well, and leaving Wigner’s question unanswered.

Maybe one way to summarize my point in this post is that modeling the decisions as learning about oneself and the world is more useful for making good decisions than modeling an agent as changing the world with her decisions.

Thought and action are two different things, as different as a raven and a writing desk.

• Will only reply to one part, to highlight our basic (ontological?) differences:

Thought and action are two different things, as different as a raven and a writing desk.

A thought is a physical process in the brain, which is a part of the universe. An action is also a physical process in the universe, so it is very much like a thought, only more visible to those without predictive powers.

• If choice and counterfactuals exist, then an action is something that can affect the future, while a thought is not. Of course, that difference no longer applies if your ontology doesn’t feature choices and counterfactuals...

Will only reply to one part, to highlight our basic (ontological?) differences:

What your ontology should be is “nothing” or “mu”. You are not living up to your commitments.

• We seem to have very different ontologies here, and not converging. Also, telling me what my ontology “should” be is less than helpful :) It helps to reach mutual understanding before giving prescriptions to the other person. Assuming you are interested in more understanding, and less prescribing, let me try again to explain what I mean.

If choice and counterfactuals exist, then an action is something that can affect the future, while a thought is not. Of course, that difference no longer applies if your ontology doesn’t feature choices and counterfactuals…

In the view I am describing here “choice” is one of the qualia, a process in the brain. Counterfactuals is another, related, quale, the feeling of possibilities. Claiming anything more is a mind projection fallacy. The mental model of the world changes with time. I am not even claiming that time passes, just that there is a mental model of the universe, including the counterfactuals, for each moment in the observer’s time. I prefer the term “observer” to agent, since it does not imply having a choice, only watching the world (as represented by the observer’s mental model) unfold.

• We seem to have very different ontologies here,

And very different epistemologies. I am not denying the very possibility of knowing things about reality.

and not converging. Also, telling me what my ontology “should” be is less than helpful :) It helps to reach mutual understanding before giving prescriptions to the other person.

All I am doing is taking you at your word.

You keep saying that it is models all the way down, and there is no way to make true claims about reality. If I am not to take those comments literally, how am I to take them? How am I to guess the correct non-literal interpretation, out of the many possible ones?

In the view I am describing here “choice” is one of the qualia, a process in the brain. Counterfactuals is another, related, quale, the feeling of possibilities. Claiming anything more is a mind projection fallacy.

That’s an implicit claim about reality. Something can only be a mind projection if there is nothing in reality corresponding to it. It is not sufficient to say that it is in the head or the model; it also has to not be in the territory, or else it is a true belief, not a mind projection. To say that something doesn’t exist in reality is to make a claim about reality as much as to say that something does.

The mental model of the world changes with time. I am not even claiming that time passes, just that there is a mental model of the universe, including the counterfactuals, for each moment in the observer’s time.

Again, “in the model” does not imply “not in the territory”.

• I dislike the terms like “exist” as pointing to some objective reality,

You seem happy enough with “not exist”, as in “agents, counterfactuals and choices don’t exist”.

Once you consciously replace “true” with “useful” and “exist” with “usefully modeled as,” a lot of confusion over what exists and what does not, what is true and what is false, what is knowledge and what is belief, what is objective and what is subjective, simply melts away.

If it is really possible for an agent to affect the future or steer themselves into alternative futures, then there is a lot of potential utility in it, in that you can end up in a higher-utility future than you would otherwise have. OTOH, if there are no counterfactuals, then whatever utility you gain is predetermined. So one cannot assess the usefulness, in the sense of utility gain, of models in a way independent of the metaphysics of determinism and counterfactuals. What is useful, and how useful it is, depends on what is true.

I agree that a lot of agent-looking behavior can be usefully modeled as a multi-level control system, and, if anything, this is not done enough in biology, neuroscience or applied philosophy, if the latter is even a thing. By the same token, the control system approach is a useful abstraction for many observed phenomena, living or otherwise, not just agents. It does not lay claim to what an agent is, just what approach can be used to describe some agenty behaviors. I see absolutely no contradiction with what I said here or elsewhere.

It contradicts the “agents don’t exist” thing and the “I never talk about existence” thing. If you only object to reductively inexplicable agents, that would be better expressed as “there is nothing nonreductive”.

Although that still wouldn’t help you come to the conclusion that there is no choice and no counterfactuals, because that is much more about determinism than reductionism.

• If it is really possible for an agent to affect the future or steer themselves into alternative futures, then there is a lot of potential utility in it, in that you can end up in a higher-utility future than you would otherwise have. OTOH, if there are no counterfactuals, then whatever utility you gain is predetermined.

Yep, some possible worlds have more utility for a given agent than others. And, yes, sort of. Whatever utility you gain is not your free choice, and not necessarily predetermined, just not under your control. You are a mere observer who thinks they can change the world.

It contradicts the “agents don’t exist” thing and the “I never talk about existence” thing.

I don’t see how. Seems there is an inferential gap there we haven’t bridged.

• Whatever utility you gain is not your free choice

That’s a statement about the world. Care to justify it?

• Once you consciously replace “true” with “useful” and “exist” with “usefully modeled as,” a lot of confusion over what exists and what does not, what is true and what is false, what is knowledge and what is belief, what is objective and what is subjective, simply melts away.

How do you know that the people who say “agents exist” don’t mean “some systems can be usefully modelled as agents”?

By the same token, the control system approach is a useful abstraction for many observed phenomena, living or otherwise, not just agents. It does not lay claim to what an agent is, just what approach can be used to describe some agenty behaviors. I see absolutely no contradiction with what I said here or elsewhere.

You are making a claim about reality, that counterfactuals don’t exist, even though you are also making a meta claim that you don’t make claims about reality.

If probabilistic agents[*] and counterfactuals are both useful models (and I don’t see how you can consistently assert the former and deny the latter) then counterfactuals “exist” by your lights.

[*] Or automaton, if you prefer. If someone builds a software gizmo that is probabilistic and acts without specific instruction, then it is an agent and an automaton all at the same time.

• But there is no “top-down causation” in physics that supports this view.

There is no full-strength top-down determinism, but systems-level behaviour is enough to support a common-sense view of decision making.

• I agree, the apparent emergent high-level structures look awfully like agents. That intentional stance tends to dissipate once we understand them more.

• If intentionality just means seeking to pursue or maximise some goal, there is no reason an artificial system should not have it. But the answer is different if intentionality means having a ghost or homunculus inside. And neither is the same as the issue of whether an agent is deterministic, or capable of changing the future.

More precision is needed.

• Even when the agent has more compute than we do? I continue to take the intentional stance towards agents I understand but can’t compute, like MCTS-based chess players.

• What do you mean by taking the intentional stance in this case?

• I would model the program as a thing that is optimizing for a goal. While I might know something about the program’s weaknesses, I primarily model it as a thing that selects good chess moves. Especially if it is a better chess player than I am.

• This seems to cut through a lot of confusion present in decision theory, so I guess the obvious question to ask is why don’t we already work things this way instead of the way they are normally approached in decision theory?

• To the extent that this approach is a decision theory, it is some variant of UDT (see this explanation). The problems with applying and formalizing it are the usual problems with applying and formalizing UDT:

• How do you con­struct “policy coun­ter­fac­tu­als”, e.g. wor­lds where “I am the type of per­son who one-boxes” and “I am the type of per­son who two-boxes”? (This isn’t a prob­lem if the en­vi­ron­ment is already speci­fied as a func­tion from the agent’s policy to out­come, but that of­ten isn’t how things work in the real world)

• How do you in­te­grate this with log­i­cal un­cer­tainty, such that you can e.g. con­struct “pos­si­ble wor­lds” where the 1000th digit of pi is 2 (when in fact it isn’t)? If you don’t do this then you get wrong an­swers on ver­sions of these prob­lems that use log­i­cal pseu­do­ran­dom­ness rather than phys­i­cal ran­dom­ness.

• How does this be­have in multi-agent prob­lems, with other ver­sions of it­self that have differ­ent util­ity func­tions? Naively both agents would try to di­ag­o­nal­ize against each other, and an in­finite loop would re­sult.

• Those are ex­cel­lent ques­tions! Thank you for ac­tu­ally ask­ing them, in­stead of sim­ply stat­ing some­thing like “What you wrote is wrong be­cause...”

Let me try to have a crack at them, with­out claiming that “I have solved de­ci­sion the­ory, ev­ery­one can go home now!”

How do you con­struct “policy coun­ter­fac­tu­als”, e.g. wor­lds where “I am the type of per­son who one-boxes” and “I am the type of per­son who two-boxes”? (This isn’t a prob­lem if the en­vi­ron­ment is already speci­fied as a func­tion from the agent’s policy to out­come, but that of­ten isn’t how things work in the real world)

“I am a one-boxer” and “I am a two-boxer” are both pos­si­ble wor­lds, and by watch­ing your­self work through the prob­lem you learn in which world you live. Maybe I mi­s­un­der­stand what you are say­ing though.

How do you in­te­grate this with log­i­cal un­cer­tainty, such that you can e.g. con­struct “pos­si­ble wor­lds” where the 1000th digit of pi is 2 (when in fact it isn’t)? If you don’t do this then you get wrong an­swers on ver­sions of these prob­lems that use log­i­cal pseu­do­ran­dom­ness rather than phys­i­cal ran­dom­ness.

As of this moment, both are possible worlds for me. If I were to look up or calculate the 1000th digit of pi, I would learn a bit more about the world I am in. (That is not counting lower-probability worlds, like ones where I calculated the result wrongly, and so on.) Or I might choose not to look it up, and both worlds would remain possible until and unless I gain, intentionally or accidentally (there is no difference; intentions and accidents are not a physical thing, but a human abstraction at the level of the intentional stance), some knowledge about the burning question of the 1000th digit of pi.

Can you give an ex­am­ple of a prob­lem “that uses log­i­cal pseu­do­ran­dom­ness” where sim­ply enu­mer­at­ing wor­lds would give a wrong an­swer?

How does this be­have in multi-agent prob­lems, with other ver­sions of it­self that have differ­ent util­ity func­tions? Naively both agents would try to di­ag­o­nal­ize against each other, and an in­finite loop would re­sult.

I am not sure in what way an agent that has a different utility function is in any sense yourself. An example would be good. My guess is that you might be referring to a Nash equilibrium that is a mixed strategy, but maybe I am wrong.

• “I am a one-boxer” and “I am a two-boxer” are both pos­si­ble wor­lds, and by watch­ing your­self work through the prob­lem you learn in which world you live. Maybe I mi­s­un­der­stand what you are say­ing though.

The in­ter­est­ing for­mal ques­tion here is: given a de­scrip­tion of the world you are in (like the de­scrip­tions in this post), how do you enu­mer­ate the pos­si­ble wor­lds? A solu­tion to this prob­lem would be very use­ful for de­ci­sion the­ory.

If an agent knows its source code, then “I am a one-boxer” and “I am a two-boxer” could be taken to re­fer to cur­rently-un­known log­i­cal facts about what its source code out­puts. You could be propos­ing a de­ci­sion the­ory whereby the agent uses some method for rea­son­ing about log­i­cal un­cer­tainty (such as enu­mer­at­ing log­i­cal wor­lds), and se­lects the ac­tion such that its ex­pected util­ity is high­est con­di­tional on the event that its source code out­puts this ac­tion. (I am not ac­tu­ally sure ex­actly what you are propos­ing, this is just a guess).

If the log­i­cal un­cer­tainty is rep­re­sented by a log­i­cal in­duc­tor, then this de­ci­sion the­ory is called “LIEDT” (log­i­cal in­duc­tor EDT) at MIRI, and it has a few prob­lems, as ex­plained in this post. First, log­i­cal in­duc­tors have un­defined be­hav­ior when con­di­tion­ing on very rare events (this is similar to the cos­mic ray prob­lem). Se­cond, it isn’t up­date­less in the right way (see the re­ply to the next point for more on this prob­lem).

I’m not claiming that it’s im­pos­si­ble to solve the prob­lems by world-enu­mer­a­tion, just that for­mally spec­i­fy­ing the world-enu­mer­a­tion pro­ce­dure is an open prob­lem.

Can you give an ex­am­ple of a prob­lem “that uses log­i­cal pseu­do­ran­dom­ness” where sim­ply enu­mer­at­ing wor­lds would give a wrong an­swer?

Say you’re be­ing coun­ter­fac­tu­ally mugged based on the 1000th digit of pi. Omega, be­fore know­ing the 1000th digit of pi, pre­dicts whether you would pay up if the 1000th digit of pi is odd (note: it’s ac­tu­ally even), and re­wards you if the digit is even. You now know that the digit is odd and are con­sid­er­ing pay­ing up.

Since you know the 1000th digit, you know the world where the 1000th digit of pi is even is im­pos­si­ble. A dumber ver­sion of you could con­sider the 1000th digit of pi to be un­cer­tain, but does this dumber ver­sion of you have enough com­pu­ta­tional abil­ity to an­a­lyze the prob­lem prop­erly and come to the right an­swer? How does this dumber ver­sion rea­son cor­rectly about the prob­lem while never find­ing out the value of the 1000th digit of pi? Again, I’m not claiming this is im­pos­si­ble, just that it’s an open prob­lem.

I am not sure in what way an agent that has a differ­ent util­ity func­tion is at all your­self. An ex­am­ple would be good.

Con­sider the fol­low­ing nor­mal-form game. Each of 2 play­ers se­lects an ac­tion, 0 or 1. Call their ac­tions x1 and x2. Now player 1 gets util­ity 9*x2-x1, and player 2 gets util­ity 10*x1-x2. (This is an asym­met­ric var­i­ant of pris­oner’s dilemma; I’m mak­ing it asym­met­ric on pur­pose to avoid a triv­ial solu­tion)

Call your de­ci­sion the­ory WEDT (“world-enu­mer­a­tion de­ci­sion the­ory”). What hap­pens when two WEDT agents play this game with each other? They have differ­ent util­ity func­tions but the same de­ci­sion the­ory. If both try to enu­mer­ate wor­lds, then they end up in an in­finite loop (player 1 is think­ing about what hap­pens if they se­lect ac­tion 0, which re­quires simu­lat­ing player 2, but that causes player 2 to think about what hap­pens if they se­lect ac­tion 0, which re­quires simu­lat­ing player 1, etc).
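The mutual-simulation regress described here can be illustrated with a toy sketch (hypothetical names; in reality there is no depth cap, so the recursion would never return):

```python
def naive_eval(player, depth=0, limit=100):
    """Naive mutual simulation: to score its own actions, each agent
    simulates the other agent's deliberation, which simulates it back.
    The cap exists only to make the non-termination visible."""
    if depth > limit:
        raise RecursionError("mutual simulation never bottoms out")
    other = 2 if player == 1 else 1
    # Evaluating my action requires the other player's action first...
    return naive_eval(other, depth + 1, limit)
```

Calling `naive_eval(1)` immediately descends into the player-1-simulates-player-2-simulates-player-1 loop and hits the cap, which is the infinite regress described above.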

• Thank you for your pa­tience ex­plain­ing the cur­rent lead­ing edge and an­swer­ing my ques­tions! Let me try to see if my un­der­stand­ing of what you are say­ing makes sense.

If an agent knows its source code, then “I am a one-boxer” and “I am a two-boxer” could be taken to re­fer to cur­rently-un­known log­i­cal facts about what its source code out­puts.

By “source code” I as­sume you mean the al­gorithm that com­pletely de­ter­mines the agent’s ac­tions for a known set of in­puts, though maybe calcu­lat­ing these ac­tions is ex­pen­sive, hence some of them could be “cur­rently un­known” un­til the al­gorithm is ei­ther an­a­lyzed or simu­lated.

Con­sider the fol­low­ing nor­mal-form game. Each of 2 play­ers se­lects an ac­tion, 0 or 1. Call their ac­tions x1 and x2. Now player 1 gets util­ity 9*x2-x1, and player 2 gets util­ity 10*x1-x2.

...

If both try to enu­mer­ate wor­lds, then they end up in an in­finite loop

Enu­mer­at­ing does not re­quire simu­lat­ing. It is de­scrip­tive, not pre­scrip­tive. So there are 4 pos­si­ble wor­lds, 00, 01, 10 and 11, with re­wards for player 1 be­ing 0, 9, −1, 8, and for player 2 be­ing 0, −1, 10, 9. But to as­sign prior prob­a­bil­ities to these wor­lds, we need to dis­cover more about the play­ers. For pure strat­egy play­ers some of these wor­lds will be prob­a­bil­ity 1 and oth­ers 0. For mixed strat­egy play­ers things get slightly more in­ter­est­ing, since the wor­lds are pa­ram­e­ter­ized by prob­a­bil­ity:

Let’s suppose that player 1 picks each action with probabilities p and 1-p, and player 2 with probabilities q and 1-q. Then the probabilities of the four worlds are pq, p(1-q), (1-p)q and (1-p)(1-q), and the expected utility contributions from each world are, for player 1: 0, 9p(1-q), -(1-p)q, 8(1-p)(1-q), and for player 2: 0, -p(1-q), 10(1-p)q, 9(1-p)(1-q). Out of the infinitely many possible worlds there will be one with the Nash equilibrium, where each player is indifferent to which decision the other player ends up making. This is, again, purely descriptive. By learning more about what strategy the agents use, we can evaluate the expected utility for each one, and, after the game is played, whether once or repeatedly, learn more about the world the players live in. The question you posed

What hap­pens when two WEDT agents play this game with each other?

is in tension with the whole idea of agents not being able to affect the world, only being able to learn about the world they live in. There is no such thing as a WEDT agent. If one of the players is the type that does the analysis and picks the mixed strategy with the Nash equilibrium, they maximize their expected utility, regardless of what type of agent the other player is.
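The descriptive enumeration of the four worlds and their expected utilities can be sketched in a few lines (a toy illustration only; the function names are mine, not anything from the thread):

```python
from itertools import product

def u1(x1, x2):
    """Player 1's utility in world (x1, x2)."""
    return 9 * x2 - x1

def u2(x1, x2):
    """Player 2's utility in world (x1, x2)."""
    return 10 * x1 - x2

# The four possible worlds 00, 01, 10, 11 and their reward pairs.
worlds = {(x1, x2): (u1(x1, x2), u2(x1, x2))
          for x1, x2 in product((0, 1), repeat=2)}

def expected_utilities(p, q):
    """Expected utilities when player 1 plays action 0 with probability p
    and player 2 plays action 0 with probability q."""
    prob = {(0, 0): p * q, (0, 1): p * (1 - q),
            (1, 0): (1 - p) * q, (1, 1): (1 - p) * (1 - q)}
    eu1 = sum(prob[w] * worlds[w][0] for w in worlds)
    eu2 = sum(prob[w] * worlds[w][1] for w in worlds)
    return eu1, eu2
```

For pure strategies (p and q equal to 0 or 1) this reproduces the reward table above; for mixed strategies it gives the parameterized expected utilities, with no simulation of either player required.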

Say you’re be­ing coun­ter­fac­tu­ally mugged based on the 1000th digit of pi. Omega, be­fore know­ing the 1000th digit of pi, pre­dicts whether you would pay up if the 1000th digit of pi is odd (note: it’s ac­tu­ally even), and re­wards you if the digit is even. You now know that the digit is odd and are con­sid­er­ing pay­ing up.
Since you know the 1000th digit, you know the world where the 1000th digit of pi is even is im­pos­si­ble.

I am missing something… The whole setup is unclear. Counterfactual mugging is a trivial problem in terms of world enumeration: an agent who does not pay lives in the world where she has higher utility. It does not matter what Omega says or does, or what the 1000th digit of pi is.

You could be propos­ing a de­ci­sion the­ory whereby the agent uses some method for rea­son­ing about log­i­cal un­cer­tainty (such as enu­mer­at­ing log­i­cal wor­lds), and se­lects the ac­tion such that its ex­pected util­ity is high­est con­di­tional on the event that its source code out­puts this ac­tion. (I am not ac­tu­ally sure ex­actly what you are propos­ing, this is just a guess).

Maybe this is where the in­fer­en­tial gap lies? I am not propos­ing a de­ci­sion the­ory. Abil­ity to make de­ci­sions re­quires free­dom of choice, mag­i­cally af­fect­ing the world through un­phys­i­cal top-down cau­sa­tion. I am sim­ply ob­serv­ing which of the many pos­si­ble wor­lds has what util­ity for a given ob­server.

• OK, I mis­in­ter­preted you as recom­mend­ing a way of mak­ing de­ci­sions. It seems that we are in­ter­ested in differ­ent prob­lems (as I am try­ing to find al­gorithms for mak­ing de­ci­sions that have good perfor­mance in a va­ri­ety of pos­si­ble prob­lems).

Re top down causation: I am curious what you think of a view where there are both high and low level descriptions that can be true at the same time, and have their own parallel causalities that are consistent with each other. Say that at the low level, the state type is L and the transition function is tl : L → L. At the high level, the state type is H and the nondeterministic transition function is th : H → Set(H), i.e. at a high level sometimes you don’t know what state things will end up in. Say we have some function f : L → H for mapping low-level states to high-level states, so each low-level state corresponds to a single high-level state, but a single high-level state may correspond to multiple low-level states.

Given these definitions, we could say that the high and low level ontologies are compatible if, for each low-level state s, it is the case that f(tl(s)) ∈ th(f(s)), i.e. the high-level ontology’s prediction for the next high-level state is consistent with the predicted next high-level state according to the low-level ontology and f.

Cau­sa­tion here is par­allel and sym­met­ri­cal rather than top-down: both the high level and the low level obey causal laws, and there is no cau­sa­tion from the high level to the low level. In cases where things can be made con­sis­tent like this, I’m pretty com­fortable say­ing that the high-level states are “real” in an im­por­tant sense, and that high-level states can have other high-level states as a cause.
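The compatibility condition between the two levels can be checked mechanically in a toy example (all states and maps here are invented for illustration): a six-state low level with deterministic dynamics tl, coarse-grained by f into a three-state high level with nondeterministic dynamics th.

```python
L = range(6)  # low-level microstates

def tl(s):
    """Deterministic low-level transition, tl : L -> L."""
    return (s + 1) % 6

def f(s):
    """Coarse-graining map, f : L -> H, with H = {0, 1, 2}."""
    return s // 2

def th(h):
    """Nondeterministic high-level transition, th : H -> Set(H)."""
    return {h, (h + 1) % 3}

def compatible():
    """The two ontologies are compatible if, for every low-level state s,
    the high-level image of the low-level successor, f(tl(s)),
    is among the high-level predictions th(f(s))."""
    return all(f(tl(s)) in th(f(s)) for s in L)
```

Here both levels obey their own causal laws, and `compatible()` confirms they never disagree, which is the parallel-causality picture described above.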

EDIT: re­gard­ing more minor points: Thanks for the ex­pla­na­tion of the multi-agent games; that makes sense al­though in this case the enu­mer­ated wor­lds are fairly low-fidelity, and mak­ing them higher-fidelity might lead to in­finite loops. In coun­ter­fac­tual mug­ging, you have to be able to enu­mer­ate both the world where the 1000th digit of pi is even and where the 1000th digit of pi is odd, and if you are do­ing log­i­cal in­fer­ence on each of these wor­lds then that might be hard; con­sider the difficulty of imag­in­ing a pos­si­ble world where 1+1=3.

• OK, I mis­in­ter­preted you as recom­mend­ing a way of mak­ing de­ci­sions. It seems that we are in­ter­ested in differ­ent prob­lems (as I am try­ing to find al­gorithms for mak­ing de­ci­sions that have good perfor­mance in a va­ri­ety of pos­si­ble prob­lems).

Right. I would also be in­ter­ested in the al­gorithms for mak­ing de­ci­sions if I be­lieved we were agents with free will, free­dom of choice, abil­ity to af­fect the world (in the model where the world is ex­ter­nal re­al­ity) and so on.

what you think of a view where there are both high and low level de­scrip­tions that can be true at the same time, and have their own par­allel causal­ities that are con­sis­tent with each other.

Absolutely, once you replace “true” with “useful” :) We can have multiple models at different levels that make accurate predictions of future observations. I assume that in your notation tl:L→L is an endomorphism within a set of microstates L, and th:H→Set(H) is a map from a macrostate type H (what would be an example of this state type?) to a wider set of macrostates (like what?). I am guessing that this may match up with the standard definitions of microstates and macrostates in statistical mechanics, and possibly some kind of a statistical ensemble. Anyway, your statement is one of emergence: the evolution of microstates maps into an evolution of macrostates, sort of like the laws of statistical mechanics map into the laws of thermodynamics. In physics this is known as an effective theory. If so, I have no issue with that. Certainly one can call, say, gas compression by an external force a cause of the gas absorbing mechanical energy and heating up. In the same sense, one can talk about emergent laws of human behavior, where a decision by an agent is a cause of change in the world the agent inhabits. So a decision theory is an emergent effective theory where we don’t try to go down to the level of states L, be those single neurons, neuronal electrochemistry, ion channels opening and closing according to some quantum chemistry and atomic physics, or even lower. This seems to be a flavor of compatibilism.

What I have an is­sue with is the ap­par­ent break of the L→H map­ping when one pos­tu­lates top-down cau­sa­tion, like free choice, i.e. mul­ti­ple differ­ent H’s reach­able from the same microstate.

in this case the enu­mer­ated wor­lds are fairly low-fidelity

I am con­fused about the low/​high-fidelity. In what way what I sug­gested is low-fidelity? What is miss­ing from the pic­ture?

con­sider the difficulty of imag­in­ing a pos­si­ble world where 1+1=3.

Why would it be difficult? A pos­si­ble world is about the ob­server’s men­tal model, and most mod­els do not map neatly into any L or H that matches known phys­i­cal laws. Most mag­i­cal think­ing is like that (e.g. faith, OCD, free will).

• I would also be in­ter­ested in the al­gorithms for mak­ing de­ci­sions if I be­lieved we were agents with free will, free­dom of choice, abil­ity to af­fect the world (in the model where the world is ex­ter­nal re­al­ity) and so on.

My guess is that you, in prac­tice, ac­tu­ally are in­ter­ested in find­ing de­ci­sion-rele­vant in­for­ma­tion and rele­vant ad­vice, in ev­ery­day de­ci­sions that you make. I could be wrong but that seems re­ally un­likely.

Re microstates/​macrostates: it seems like we mostly agree about microstates/​macrostates. I do think that any par­tic­u­lar microstate can only lead to one macrostate.

I am con­fused about the low/​​high-fidelity.

By “low-fidelity” I mean the de­scrip­tion of each pos­si­ble world doesn’t con­tain a com­plete de­scrip­tion of the pos­si­ble wor­lds that the other agent enu­mer­ates. (This ac­tu­ally has to be the case in sin­gle-per­son prob­lems too, oth­er­wise each pos­si­ble world would have to con­tain a de­scrip­tion of ev­ery other pos­si­ble world)

Why would it be difficult?

An issue with imagining a possible world where 1+1=3 is that it’s not clear in what order to make logical inferences. If you make a certain sequence of logical inferences with the axiom 1+1=3, then you get 2=1+1=3; if you make a different sequence of inferences, then you get 2=1+1=(1+1-1)+(1+1-1)=(3-1)+(3-1)=4. (It seems pretty likely to me that, for this reason, logic is not the right setting in which to formalize logically impossible counterfactuals, and taking counterfactuals on logical statements is confused in one way or another)
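The two inference orders can be made concrete in a trivial sketch (purely illustrative; the variable names are mine):

```python
# Stipulate the extra axiom "1 + 1 = 3" and rewrite "1 + 1" in two orders.
ONE_PLUS_ONE = 3  # the stipulated axiom

# Order A: apply the axiom directly.
# 1+1 -> 3
order_a = ONE_PLUS_ONE

# Order B: first rewrite each 1 as (1+1-1), then apply the axiom twice.
# 1+1 = (1+1-1) + (1+1-1) -> (3-1) + (3-1)
order_b = (ONE_PLUS_ONE - 1) + (ONE_PLUS_ONE - 1)

# The two derivations disagree (3 vs 4), so the inference order matters.
assert order_a != order_b
```

The same starting point yields 3 under one rewriting order and 4 under the other, which is exactly the ambiguity the comment points at.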

If we fix a par­tic­u­lar men­tal model of this world, then we can an­swer ques­tions about this model; part of the de­ci­sion the­ory prob­lem is de­cid­ing what the men­tal model of this world should be, and that is pretty un­clear.

• My guess is that you, in prac­tice, ac­tu­ally are in­ter­ested in find­ing de­ci­sion-rele­vant in­for­ma­tion and rele­vant ad­vice, in ev­ery­day de­ci­sions that you make. I could be wrong but that seems re­ally un­likely.

Yes, of course I do, I cannot help it. But just because we do something doesn’t mean we have the free will to either do or not do it.

I do think that any par­tic­u­lar microstate can only lead to one macrostate.

Right, I can­not imag­ine it be­ing oth­er­wise, and that is where my beef with “agents have free­dom of choice” is.

An issue with imagining a possible world where 1+1=3 is that it’s not clear in what order to make logical inferences. If you make a certain sequence of logical inferences with the axiom 1+1=3, then you get 2=1+1=3; if you make a different sequence of inferences, then you get 2=1+1=(1+1-1)+(1+1-1)=(3-1)+(3-1)=4.

Since possible worlds are in the observer’s mind (obviously, since math is a mental construction to begin with, no matter how much people keep arguing whether mathematical laws are invented or discovered), different people may make a suboptimal inference in different places. We call those “mistakes”. Most times people don’t explicitly use axioms, though sometimes they do. Some axioms are more useful than others, of course. Starting with 1+1=3 in addition to the usual remaining set, we can prove that all numbers are equal. Or maybe we end up with a mathematical model where adding odd numbers only leads to odd numbers. In that sense, not knowing more about the world, we are indeed in a “low-fidelity” situation, with many possible (micro-)worlds where 1+1=3 is an axiom. Some of these worlds might even have a useful description of observations (imagine, for example, one where each couple requires a chaperone, where 1+1 is literally 3).

• If we fix a par­tic­u­lar men­tal model of this world, then we can an­swer ques­tions about this model; part of the de­ci­sion the­ory prob­lem is de­cid­ing what the men­tal model of this world should be, and that is pretty un­clear.

In other words, usefulness (which DT to use) depends on truth (which world model to use).

• What I have an is­sue with is the ap­par­ent break of the L→H map­ping when one pos­tu­lates top-down cau­sa­tion, like free choice, i.e. mul­ti­ple differ­ent H’s reach­able from the same microstate.

If there is indeterminism at the micro level, there is not the slightest doubt that it can be amplified to the macro level, because quantum mechanics as an experimental science depends on the ability to make macroscopic records of events involving single particles.

• Am­plify­ing micro­scopic in­de­ter­minism is definitely a thing. It doesn’t help the free choice ar­gu­ment though, since the ob­server is not the one mak­ing the choice, the un­der­ly­ing quan­tum me­chan­ics does.

• Macro­scopic in­de­ter­minism is suffi­cient to es­tab­lish real, not merely log­i­cal, coun­ter­fac­tu­als.

Besides that, it would be helpful to separate the ideas of dualism, agency and free choice. If the person making the decision is not some ghost in the machine, then the only thing they can be is the machine, as a total system. In that case, the question becomes whether the system as a whole can choose, could have chosen otherwise, etc.

But you’re in good com­pany: Sam Har­ris is similarly con­fused.

• But you’re in good com­pany: Sam Har­ris is similarly con­fused.

Not con­de­scend­ing in the least :P

There are no “real” coun­ter­fac­tu­als, only the mod­els in the ob­server’s mind, some even­tu­ally proven bet­ter re­flect­ing ob­ser­va­tions than oth­ers.

It would be helpful to separate the ideas of dualism, agency and free choice. If the person making the decision is not some ghost in the machine, then the only thing they can be is the machine, as a total system. In that case, the question becomes whether the system as a whole can choose, could have chosen otherwise, etc.

It would be helpful, yes, if they were sep­a­rable. Free choice as any­thing other than illu­sion­ism is tan­ta­mount to du­al­ism.

• There are no “real” coun­ter­fac­tu­als, only the mod­els in the ob­server’s mind, some even­tu­ally proven bet­ter re­flect­ing ob­ser­va­tions than oth­ers.

You need to argue for that claim, not just state it. The contrary claim is supported by a simple argument: if an event is indeterministic, it need not have happened, or need not have happened that way. Therefore, there is a real possibility that it did not happen, or happened differently—and that is a real counterfactual.

It would be helpful, yes, if they were sep­a­rable. Free choice as any­thing other than illu­sion­ism is tan­ta­mount to du­al­ism.

You need to ar­gue for that claim as well.

• if an event is indeterministic, it need not have happened, or need not have happened that way

There is no such thing as “need” in Physics. There are phys­i­cal laws, de­ter­minis­tic or prob­a­bil­is­tic, and that’s it. “Need” is a hu­man con­cept that has no phys­i­cal coun­ter­part. Your “sim­ple ar­gu­ment” is an emo­tional re­ac­tion.

• Your comment has no relevance, because probabilistic laws automatically imply counterfactuals as well. In fact it’s just another way of saying the same thing. I could have shown it in modal logic, too.

• Your com­ment has no rele­vance,

Well, we have reached an im­passe. Good­bye.

• Thank you, I am glad that I am not the only one for whom cau­sa­tion-free ap­proach to de­ci­sion the­ory makes sense. UDT seems a bit like that.

• I note here that sim­ply enu­mer­at­ing pos­si­ble wor­lds evades this prob­lem as far as I can tell.

The analo­gous un­fair de­ci­sion prob­lem would be “pun­ish the agent if they sim­ply enu­mer­ate pos­si­ble wor­lds and then choose the ac­tion that max­i­mizes their ex­pected pay­out”. Not call­ing some­thing a de­ci­sion the­ory doesn’t mean it isn’t one.

• Please pro­pose a mechanism by which you can make an agent who enu­mer­ates the wor­lds seen as pos­si­ble by ev­ery agent, no mat­ter what their de­ci­sion the­ory is, end up in a world with lower util­ity than some other agent.

• Say you have an agent A who fol­lows the world-enu­mer­at­ing al­gorithm out­lined in the post. Omega makes a perfect copy of A and pre­sents the copy with a red but­ton and a blue but­ton, while tel­ling it the fol­low­ing:

“I have pre­dicted in ad­vance which but­ton A will push. (Here is a de­scrip­tion of A; you are wel­come to pe­ruse it for as long as you like.) If you press the same but­ton as I pre­dicted A would push, you re­ceive noth­ing; if you push the other but­ton, I will give you $1,000,000. Re­fus­ing to push ei­ther but­ton is not an op­tion; if I pre­dict that you do not in­tend to push a but­ton, I will tor­ture you for 3^^^3 years.” The copy’s choice of but­ton is then noted, af­ter which the copy is ter­mi­nated. Omega then pre­sents the real agent fac­ing the prob­lem with the ex­act same sce­nario as the one faced by the copy. Your world-enu­mer­at­ing agent A will always fail to ob­tain the max­i­mum$1,000,000 re­ward ac­cessible in this prob­lem. How­ever, a sim­ple agent B who chooses ran­domly be­tween the red and blue but­tons has a 50% chance of ob­tain­ing this re­ward, for an ex­pected util­ity of $500,000. There­fore, A ends up in a world with lower ex­pected util­ity than B. Q.E.D. • Your sce­nario is some­what am­bigu­ous, but let me at­tempt to an­swer all ver­sions of it that I can see. First: does the copy of A (here­after, A′) know that it’s a copy? If yes, then the win­ning strat­egy is “red if I am A, blue if I am A′”. (Or the re­verse, of course; but whichever var­i­ant A se­lects, we can be sure that A′ se­lects the same one, be­ing a perfect copy and all.) If no, then in­deed A re­ceives noth­ing, but then of course this has noth­ing to do with any copies; it is sim­ply the same sce­nario as if Omega pre­dicted A’s choice, then gave A the money if A chose differ­ently than pre­dicted—which is, of course, im­pos­si­ble (Omega is a perfect pre­dic­tor), and thus this, in turn, is the same as “Omega shows up, doesn’t give A any money, and leaves”. Or is it? You claim that in the sce­nario where Omega gives the money iff A chooses oth­er­wise than pre­dicted, A could re­ceive the money with 50% prob­a­bil­ity by choos­ing ran­domly. 
But this re­quires us to re­assess the terms of the “Omega, a perfect pre­dic­tor” stipu­la­tion, as pre­vi­ously dis­cussed by cousin_it. In any case, un­til we’ve speci­fied just what kind of pre­dic­tor Omega is, and how its pre­dic­tive pow­ers in­ter­act with sources of (pseudo-)ran­dom­ness—as well as whether, and how, Omega’s be­hav­ior changes in situ­a­tions in­volv­ing ran­dom­ness—we can­not eval­u­ate sce­nar­ios such as the one you de­scribe. • dxu did not claim that A could re­ceive the money with 50% prob­a­bil­ity by choos­ing ran­domly. They claimed that a sim­ple agent B that chose ran­domly would re­ceive the money with 50% prob­a­bil­ity. The point is that Omega is only try­ing to pre­dict A, not B, so it doesn’t mat­ter how well Omega can pre­dict B’s ac­tions. The point can be made even more clear by in­tro­duc­ing an agent C that just does the op­po­site of what­ever A would do. Then C gets the money 100% of the time (un­less A gets tor­tured, in which case C also gets tor­tured). • This doesn’t make a whole lot of sense. Why, and on what ba­sis, are agents B and C re­ceiv­ing any money? Are you sug­gest­ing some sort of sce­nario where Omega gives A money iff A does the op­po­site of what Omega pre­dicted A would do, and then also gives any other agent (such as B or C) money iff said other agent does the op­po­site of what Omega pre­dicted A would do? This is a strange sce­nario (it seems to be very differ­ent from the sort of sce­nario one usu­ally en­coun­ters in such prob­lems), but sure, let’s con­sider it. My ques­tion is: how is it differ­ent from “Omega doesn’t give A any money, ever (due to a deep-seated per­sonal dis­like of A). Other agents may, or may not, get money, de­pend­ing on var­i­ous fac­tors (the de­tails of which are moot)”? This doesn’t seem to have much to do with de­ci­sion the­o­ries. Maybe shminux ought to rephrase his challenge. 
After all— Please pro­pose a mechanism by which you can make an agent who enu­mer­ates the wor­lds seen as pos­si­ble by ev­ery agent, no mat­ter what their de­ci­sion the­ory is, end up in a world with lower util­ity than some other agent. … can be satis­fied with “Omega punches A in the face, thus caus­ing A to end up with lower util­ity than B, who re­mains un-punched”. What this tells us about de­ci­sion the­o­ries, I can’t rightly see. • This is a strange sce­nario (it seems to be very differ­ent from the sort of sce­nario one usu­ally en­coun­ters in such prob­lems), but sure, let’s con­sider it. My ques­tion is: how is it differ­ent from “Omega doesn’t give A any money, ever (due to a deep-seated per­sonal dis­like of A). Other agents may, or may not, get money, de­pend­ing on var­i­ous fac­tors (the de­tails of which are moot)”? This doesn’t seem to have much to do with de­ci­sion the­o­ries. Yes, this is cor­rect, and is pre­cisely the point EYNS was try­ing to make when they said In­tu­itively, this prob­lem is un­fair to Fiona, and we should com­pare her perfor­mance to Carl’s not on the “act differ­ently from Fiona” game, but on the analo­gous “act differ­ently from Carl” game. “Omega doesn’t give A any money, ever (due to a deep-seated per­sonal dis­like of A)” is a sce­nario that does not de­pend on the de­ci­sion the­ory A uses, and hence is an in­tu­itively “un­fair” sce­nario to ex­am­ine; it tells us noth­ing about the qual­ity of the de­ci­sion the­ory A is us­ing, and there­fore is use­less to de­ci­sion the­o­rists. (How­ever, for­mal­iz­ing this in­tu­itive no­tion of “fair­ness” is difficult, which is why EYNS brought it up in the pa­per.) I’m not sure why shminux seems to think that his world-count­ing pro­ce­dure man­ages to avoid this kind of “un­fair” pun­ish­ment; the whole point of it is that it is un­fair, and hence un­avoid­able. 
There is no way for an agent to win if the prob­lem setup is bi­ased against them to start with, so I can only con­clude that shminux mi­s­un­der­stood what EYNS was try­ing to say when he (shminux) wrote I note here that sim­ply enu­mer­at­ing pos­si­ble wor­lds evades this prob­lem as far as I can tell. • I didn’t read shminux’s post as sug­gest­ing that his scheme al­lows an agent to avoid, say, be­ing punched in the face apro­pos of noth­ing. (And that’s what all the “un­fair” sce­nar­ios de­scribed in the com­ments here boil down to!) I think we can all agree that “ar­bi­trary face-punch­ing by an ad­ver­sary ca­pa­ble of punch­ing us in the face” is not some­thing we can avoid, no mat­ter our de­ci­sion the­ory, no mat­ter how we make choices, etc. • I am not sure how else to in­ter­pret the part of shminux’s post quoted by dxu. How do you in­ter­pret it? • can be satis­fied with “Omega punches A in the face, thus caus­ing A to end up with lower util­ity than B, who re­mains un-punched”. It seems to be a good sum­mary of what dxu and Da­cyn were sug­gest­ing! I think it pre­serves the salient fea­tures with­out all the fluff of copy­ing and de­stroy­ing, or hav­ing mul­ti­ple agents. Which makes it clear why the coun­terex­am­ple does not work: I said “the wor­lds seen as pos­si­ble by ev­ery agent, no mat­ter what their de­ci­sion the­ory is,” and the un­punched world is not a pos­si­ble one for the world enu­mer­a­tor in this setup. My point was that CDT makes a sub­op­ti­mal de­ci­sion in New­comb, and FDT strug­gles to pick the best de­ci­sion in some of the prob­lems, as well, be­cause it is lost in the for­est of causal trees, or at least this is my im­pres­sion from the EYNS pa­per. 
Once you stop worrying about causality and the agent’s ability to change the world by their actions, you end up with a simpler question: “what possible world does this agent live in, and with what probability?”

• A mind-reader looks to see whether this is an agent’s decision procedure, and then tortures them if it is. The point of unfair decision problems is that they are unfair.

• “enumerates the worlds seen as possible by every agent, no matter what their decision theory is” Can you clarify this? One interpretation is that you’re talking about an agent who enumerates every world that any agent sees as possible. But your post further down seems to contradict this: “the unpunched world is not a possible one for the world enumerator”. And it’s not obvious to me that this agent can exist. Another is that the agent enumerates only the worlds that every agent sees as possible, but that agent doesn’t seem likely to get good results. And it’s not obvious to me that there are guaranteed to be any worlds at all in this intersection. Am I missing an interpretation?

• Great post! I have a question, though, about the “adversarial predictor” section. My question is: how is world #3 possible? You say: 1. Agent uses DT1 when rewarded for using DT1 and DT2 when rewarded for using DT2. However, the problem statement said: “Imagine I have a copy of Fiona, and I punish anyone who takes the same action as the copy.” Are we to suppose that the copy of Fiona that the adversarial predictor is running does not know that an adversarial predictor is punishing Fiona for taking certain actions, but that the actual Fiona does know this, and can thus deviate from what she would otherwise do?
If so, then what happens when this assumption is removed, i.e., when we do not inform Fiona that she is being watched (and possibly punished) by an adversarial predictor, or when we do inform copy-Fiona of same?

• One would have to ask Eliezer and Nate what they really meant, since it is easy to end up in a self-contradictory setup, or to ask a question about an impossible world, like asking what happens if, in the Newcomb setup, the agent decided to switch to two-boxing after the perfect predictor had already put $1,000,000 in.

My wild guess is that the FDT Fiona from the paper uses a certain decision theory DT1 that does not cope well with a world with adversarial predictors. She uses some kind of causal decision graph logic that would lead her astray instead of being in the winning world. I also assume that Fiona makes her “decisions” while being fully informed about the predictor’s intentions to punish her, and just CDT-like throws her hands in the air and cries “unfair!”
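The world-enumeration view defended in the post makes the perfect-predictor Newcomb case almost mechanical. A minimal sketch, using the standard assumed payoffs ($1,000,000 in the opaque box iff one-boxing is predicted, $1,000 always in the transparent box); the function name and structure are illustrative, not from the post:

```python
# Toy sketch of "enumerating possible worlds" for Newcomb's problem
# with a perfect predictor. Payoff values are the standard assumed ones.

def newcomb_worlds():
    """With a perfect predictor there are only two possible worlds:
    the prediction always matches the agent's actual choice."""
    worlds = []
    for choice in ("one-box", "two-box"):
        prediction = choice  # perfect predictor: prediction == choice
        opaque = 1_000_000 if prediction == "one-box" else 0
        transparent = 1_000
        payoff = opaque if choice == "one-box" else opaque + transparent
        worlds.append((choice, payoff))
    return worlds

print(newcomb_worlds())  # [('one-box', 1000000), ('two-box', 1000)]
```

There is no world in the enumeration where a two-boxer walks away with $1,001,000; the agent is simply discovering which of the two listed worlds is actual.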

• Hey, noticed what might be errors in your lesion chart: No lesion, no cancer should give +1m utils in both cases. And your probabilities don’t add to 1. Including p(lesion) explicitly doesn’t meaningfully change the EV difference, so eh. However, my understanding is that the core of the lesion problem is recognizing that p(lesion) is independent of smoking; EYNS seems to say the same. Might be worth including it to make that clearer?

(I don’t know much about decision theory, so maybe I’m just confused.)
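The structural point this comment makes can be shown with a small expected-value sketch. All numbers here are assumed for illustration (they are not the chart from the post); the only feature that matters is that the lesion probability does not depend on the choice to smoke:

```python
# Illustrative smoking-lesion EV calculation. The probabilities and
# utilities are made-up placeholders; the key structural assumption is
# that p(lesion) is independent of the decision to smoke.

P_LESION = 0.1          # assumed prior; unaffected by smoking
U_SMOKE = 1_000         # assumed utility of enjoying smoking
U_CANCER = -1_000_000   # assumed disutility of cancer (the lesion causes cancer)

def expected_utility(smoke: bool) -> float:
    # The lesion, not smoking, causes cancer, so the cancer term
    # is P_LESION * U_CANCER regardless of the choice.
    ev = P_LESION * U_CANCER
    if smoke:
        ev += U_SMOKE
    return ev

# Including p(lesion) explicitly leaves the EV difference untouched:
print(expected_utility(True) - expected_utility(False))  # 1000.0
```

Since the lesion term cancels out of the comparison, smoking dominates whenever the agent enjoys it, which is the point about independence that the commenter attributes to EYNS.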

• Assuming that an agent who doesn’t have the lesion gains no utility from smoking OR from having cancer changes the problem.

But apart from that, this post is pretty good at explaining how to approach these problems from the perspective of Timeless Decision Theory. Worth reading about it if you aren’t familiar.

Also, it is generally agreed that in a deterministic world we don’t really make decisions as per libertarian free will. The question is then how to construct the counterfactuals for the decision problem. I’m in agreement with you that TDT is much more consistent, as its counterfactuals tend to describe actually consistent worlds.

• I’m slightly confused. Is it that we’re learning about which world we are in or, given that counterfactuals don’t actually exist, are we learning what our own decision theory is given some stream of events/worldline?

• What is the difference between the two? The world includes the agent, and discovering more about the world implies self-discovery.

• The compatibilist concept of free will is practical. It tells you under which circumstances someone can be held legally or ethically responsible. It does not require global additions to how the laws of the universe work. Only when compatibilist free will is asserted as being the only kind does it become a metaphysical claim, or rather an anti-metaphysical one. The existence of compatibilist free will isn’t worth arguing about: it’s designed to be compatible with a wide variety of background assumptions.

Magical, or “counter-causal”, free will is designed to be absurd and impossible from the outset, and therefore is not worth worrying about either. (Incidentally, no proponent of libertarianism ever uses the term “counter-causal”.)

What is worth worrying about is broadly naturalistic libertarian free will. That is, a conception of free will that, unlike compatibilism, has some defeasible requirements, such as indeterministic laws of physics, but only requirements which are logically and physically possible. The middle ground is where the action is. (Note that the magical notion of free will is often accused of needing some fundamental third alternative to determinism and chance, whereas naturalistic libertarianism only requires a mixture of the two, structured in a certain way.)

• The compatibilist concept of free will is practical.

Yep, no qualms there. It is definitely the pragmatic approach that works in the usual circumstances. The problems arise when you start exploring farther from the mainstream, where your intuition fails, as with Newcomb’s problem.

I don’t really understand the rest of your point. The libertarian free will of “our choices are free from the determination or constraints of human nature and free from any predetermination by God” is pure magical thinking not grounded in science. There is no difference between determinism and chance in that sense, and neither is top-down causation. Scott Aaronson suggested the Knightian free bit option as a source of true unpredictability, which seems to be an inherent requirement for a libertarian free will not based on magic. Being in a simulation is an old standby, of course.

whereas naturalistic libertarianism only requires a mixture of the two, structured in a certain way

In what way?

• Perhaps I should have been clearer that complete determinism versus indeterminism is an open question in science. But then maybe you knew, because you made a few references to indeterminism already. And maybe you knew because the issue is crucial to the correct interpretation of QM, which is discussed interminably here.

There is no difference between determinism and chance in that sense,

You hint very briefly at the idea that randomness doesn’t support libertarian FW, but that is an open question in philosophy. It has been given book-length treatments.

Scott Aaronson suggested the Knightian free bit option as a source of true unpredictability, which seems to be an inherent requirement for the libertarian free will not based on magic

Which is it? Is indeterminism incapable of supporting FW, as stated in the first quote, or capable, as in the second?

But that is slightly beside the point, since you are arguing against counterfactuals, and the existence of counterfactuals follows tautologously from the absence of strict determinism, questions of free will aside.

• We know that physics does not support the idea of metaphysical free will. By metaphysical free will I mean the magical ability of agents to change the world by just making a decision to do so. To the best of our knowledge, we are all (probabilistic) automatons who think themselves as agents with free choice

If a probabilistic agent can make a decision that is not fully determined by previous events, then the consequences of that decision trace back to the agent, as a whole system, and no further. That seems to support a respectable enough version of “changing the future”. “Magic” might mean being able to make any decision, or carry through any decision, or having a decision-making faculty with no moving parts. “Magic” is a term very worth tabooing.

• If a probabilistic agent can make a decision that is not fully determined by previous events, then the consequences of that decision trace back to the agent, as a whole system, and no further.

Yes, if that view were supported by evidence, that would count as free will. Thus far, whenever we gain the tools to look further, we can trace the consequences further back, with no clear boundary in sight, beyond the inherent randomness of the ion channels in the neurons firing according to a suitable Markov chain model.

• Well, which? Iron chains of causality stretching back to infinity, or inherent randomness?

You may be taking it as obvious that both randomness and determinism exclude (some version of) free will, but that needs to be spelt out.

• Scott Aaronson in The Ghost in the Quantum Turing Machine does a good job spelling all this out. There is no physical distinction between an agent and a non-agent.

• Scott Aaronson in The Ghost in the Quantum Turing Machine uses the word “agent” 37 times. The building of agents is an engineering discipline. Much of the discussion on this board is about AIs which are agentive as well as intelligent.

You might mean there is no fundamental difference between an agent and a non-agent. But then you need to show that someone, somewhere has asserted that, rather than using the word “agent” merely as a “useful” way of expressing something non-fundamental.

More precision is needed.

• Again, this is just a calculation of expected utilities, though an agent believing in metaphysical free will may take it as a recommendation to act a certain way.

Are you not recommending agents to act in a certain way? You are answering questions from EYNS of the form “Should X do Y?”, and answers to such questions are generally taken to be recommendations for X to act in a certain way. You also say things like “The twins would probably be smart enough to cooperate, at least after reading this post”, which sure sounds like a recommendation of cooperation (if they do not cooperate, you are lowering their status by calling them not smart).

• Are you not recommending agents to act in a certain way?

I have mentioned in the title and in the first part that I do not subscribe to the idea of metaphysical free will. Sure, subjectively it feels like “recommending” or “deciding” or “acting,” but there is no physical basis for treating it as actually picking one of the possible worlds. What feels like making a decision and seeing the consequences is nothing but discovering which possible world is actual. Internally and externally.

which sure sounds like a recommendation of cooperation (if they do not cooperate, you are lowering their status by calling them not smart)

“Smart” is a statement about the actual world containing the twins, and if intelligence corresponds to status in that world, then making low-utility decisions would correspond to low status. In general, I reject the intentional stance in this model. Paradoxically, it results in better decision making for those who use it to make decisions.

• My point was that intelligence corresponds to status in our world: calling the twins not smart means that you expect your readers to think less of them. If you don’t expect that, then I don’t understand why you wrote that remark.

I don’t believe in libertarian free will either, but I don’t see the point of interpreting words like “recommending,” “deciding,” or “acting” to refer to impossible behavior rather than using their ordinary meanings. However, maybe that’s just a meaningless linguistic difference between us.

• My point was that intelligence corresponds to status in our world: calling the twins not smart means that you expect your readers to think less of them.

I can see why you would interpret it this way. That was not my intention. I don’t respect Forrest Gumps any less than Einsteins.

• You don’t harbor any hopes that after reading your post, someone will decide to cooperate in the twin PD on the basis of it? Or at least, if they were already going to, that they would conceptually connect their decision to cooperate with the things you say in the post?