Complete Class: Consequentialist Foundations

The fundamentals of Bayesian thinking have been justified in many ways over the years. Most people here have heard of the VNM axioms and Dutch Book arguments. Far fewer, I think, have heard of the Complete Class Theorems (CCT).

Here, I explain why I think of CCT as a more purely consequentialist foundation for decision theory. I also show how complete-class style arguments play a role in social choice theory, justifying utilitarianism and a version of futarchy. This means CCT acts as a bridging analogy between single-agent decisions and collective decisions, thereby shedding some light on how a pile of agent-like pieces can come together and act like one agent. To me, this suggests a potentially rich vein of intellectual ore.

I have some ideas about modifying CCT to be more interesting for MIRI-style decision theory, but I’ll only do a little of that here, mostly gesturing at the problems with CCT which could motivate such modifications.


Background

My Motives

This post is a continuation of what I started in Generalizing Foundations of Decision Theory and Generalizing Foundations of Decision Theory II. The core motivation is to understand the justification for existing decision theory very well, see which assumptions are weakest, and see what happens when we remove them.

There is also a secondary motivation in human (ir)rationality: to the extent foundational arguments are real reasons why rational behavior is better than irrational behavior, one might expect these arguments to be helpful in teaching or training rationality. This is related to my criterion of consequentialism: the argument in favor of Bayesian decision theory should directly point to why it matters.

With respect to this second quest, CCT is interesting because Dutch Book and money-pump arguments point out irrationality in agents by exploiting the irrational agent. CCT is more amenable to a model in which you point out irrationality by helping the irrational agent. I am working on a more thorough expansion of that view with some co-authors.

Other Foundations

(Skip this section if you just want to know about CCT, and not why I claim it is better than alternatives.)

I give an overview of many proposed foundational arguments for Bayesianism in the first post in this series. I called out Dutch Book and money-pump arguments as the most promising, in terms of motivating decision theory only from “winning”. The second post in the series attempted to motivate all of decision theory from only those two arguments (extending work of Stuart Armstrong along those lines), and succeeded. However, the resulting argument was not very satisfying in itself. If you look at the structure of the argument, it justifies constraints on decisions via problems which would occur in hypothetical games involving money. Many philosophers have argued that the Dutch Book argument is really a way of illustrating inconsistency in belief, rather than an argument that you must be consistent or else. I think this is right. I now think this is a serious flaw behind both Dutch Book and money-pump arguments. There is no purely consequentialist reason to constrain decisions based on consistency relationships with thought experiments.

The position I’m defending in the current post has much in common with the paper Actualist Rationality by C. Manski. My disagreement with him lies in his dismissal of CCT as yet another bad argument. In my view, CCT seems to address his concerns almost precisely!

Caveat --

Dutch Book arguments are fairly practical. Betting with people, or asking them to consider hypothetical bets, is a useful tool. It may even be what convinces someone to use probabilities to represent degrees of belief. However, the argument falls apart if you examine it too closely, or at least requires extra assumptions which you have to argue in a different way. Simply put, belief is not literally the same thing as willingness to bet. Consequentialist decision theories are in the business of relating beliefs to actions, not relating beliefs to betting behavior.

Similarly, money-pump arguments can sometimes be extremely practical. The resource you’re pumped of doesn’t need to be money; it can simply be the cost of thinking longer. If you spin forever between different options because you prefer strawberry ice cream to chocolate, chocolate to vanilla, and vanilla to strawberry, you will not get any ice cream. However, the set-up of the money pump assumes that you will not notice this happening; whatever the extra cost of indecision is, it is placed outside of the considerations which can influence your decision.

So, Dutch Book “defines” belief as willingness-to-bet, and money-pump “defines” preference as willingness-to-pay; in doing so, both arguments put the justification of decision theory into hypothetical exploitation scenarios which are not quite the same as the actual decisions we face. If these were the best justifications for consequentialism we could muster, I would be somewhat dissatisfied, but would likely leave it alone. Fortunately, a better alternative exists: complete class theorems.

Four Complete Class Theorems

For a thorough introduction to complete class theorems, I recommend Peter Hoff’s course notes. I’m going to walk through four complete class theorems dealing with what I think are particularly interesting cases. Here’s a map:

In words: first we’ll look at the standard setup, which assumes likelihood functions. Then we will remove the assumption of likelihood functions, since we want to argue for probability theory from scratch. Then, we will switch from talking about decision theory to social choice theory, and use CCT to derive a variant of Harsanyi’s utilitarian theorem, AKA Harsanyi’s social aggregation theorem, which tells us about cooperation between agents with common beliefs (but different utility functions). Finally, we’ll add likelihoods back in. This gets us a version of Critch’s multi-objective learning framework, which tells us about cooperation between agents with different beliefs and different utility functions.

I think of Harsanyi’s utilitarianism theorem as the best justification for utilitarianism, in much the same way that I think of CCT as the best justification for Bayesian decision theory. It is not an argument that your personal values are necessarily utilitarian-altruistic. However, it is a strong argument for utilitarian altruism as the most coherent way to care about others; and furthermore, to the extent that groups can make rational decisions, I think it is an extremely strong argument that the group decision should be utilitarian. AlexMennen discusses the theorem and implications for CEV here.

I somewhat jokingly think of Critch’s variation as “Critch’s Futarchy theorem”: in the same way that Harsanyi shows that utilitarianism is the unique way to make rational collective decisions when everyone agrees about the facts on the ground, Critch shows that rational collective decisions when there is disagreement must involve a betting market. However, Critch’s conclusion is not quite Futarchy. It is more extreme: in Critch’s framework, agents bet their voting stake rather than money! The more bets you win, the more control you have over the system; the more bets you lose, the less your preferences will be taken into account. This is, perhaps, rather harsh in comparison to governance systems we would want to implement. However, rational agents of the classical Bayesian variety are happy to make this trade.

Without further ado, let’s dive into the theorems.

Basic CCT

We set up decision problems like this:

  • $\Theta$ is the set of possible states of the external world.
  • $O$ is the set of possible observations.
  • $A$ is the set of actions which the agent can take.
  • $f(o \mid \theta)$ is a likelihood function, giving the probability of an observation $o$ under a particular world-state $\theta$.
  • $D$ is a set of decision rules. For $\delta \in D$, $\delta(o)$ outputs an action. Stochastic decision rules are allowed, though, in which case we should really think of $\delta(o)$ as outputting a probability distribution over actions.
  • $L(\theta, a)$, the loss function, takes a world $\theta$ and an action $a$ and returns a real-valued “loss”. $L$ encodes preferences: the lower the loss, the better. One way of thinking about this is that the agent knows how its actions play out in each possible world; the agent is only uncertain about consequences because it doesn’t know which possible world is the case.

In this post, I’m only going to deal with cases where $\Theta$ and $A$ are finite. This is not a minor theoretical convenience: things get significantly more complicated with infinite sets, and the justification for Bayesianism in particular is weaker. So, it’s potentially quite interesting. However, there’s only so much I want to deal with in one post.

Some more definitions:

The risk of a policy $\delta$ in a particular true world-state $\theta$ is its expected loss over observations: $R(\theta, \delta) = \sum_{o \in O} f(o \mid \theta)\, L(\theta, \delta(o))$.

A decision rule $\delta'$ is a pareto improvement over another rule $\delta$ if and only if $R(\theta, \delta') \leq R(\theta, \delta)$ for all $\theta$, with strict inequality for at least one $\theta$. This is typically called dominance in treatments of CCT, but it’s exactly parallel to the idea of pareto-improvement from economics and game theory: everyone is at least as well off, and at least one person is better off. An improvement which harms no one. The only difference here is that it’s with respect to possible states, rather than people.

A decision rule is admissible if and only if there is no pareto improvement over it. The idea is that there should be no reason not to take pareto improvements, since you’re only doing better no matter what state the world turns out to be in. (We could also call this pareto-optimal.)
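In code, these two definitions are short (a minimal sketch; risk vectors are tuples indexed by world-state, with lower risk better):

```python
def pareto_improves(r_new, r_old):
    """True if risk vector r_new is a pareto improvement over r_old (lower risk is better)."""
    return (all(a <= b for a, b in zip(r_new, r_old))
            and any(a < b for a, b in zip(r_new, r_old)))

def admissible(r, achievable):
    """True if no achievable risk vector is a pareto improvement over r."""
    return not any(pareto_improves(other, r) for other in achievable)
```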

A class of decision rules $C$ is a complete class if and only if for any rule $\delta$ not in $C$, there exists a rule in $C$ which is a pareto improvement over $\delta$. Note, not every rule in a complete class will be admissible itself. In particular, the set of all decision rules is a complete class. So, the complete class is a device for proving a weaker result than admissibility. This will actually be a bit silly for the finite case, because we can characterize the set of admissible decision rules. However, it is the namesake of complete class theorems in general; so, I figured that it would be confusing not to include it here.

Given a probability distribution $\pi$ on world-states, the Bayes risk is the expected risk over worlds, i.e.: $r(\pi, \delta) = \sum_{\theta \in \Theta} \pi(\theta)\, R(\theta, \delta)$.

A probability distribution $\pi$ is non-dogmatic when $\pi(\theta) > 0$ for all $\theta \in \Theta$.

A decision rule is bayes-optimal with respect to a distribution $\pi$ if it minimizes Bayes risk with respect to $\pi$. (This is usually called a Bayes rule with respect to $\pi$, but that seems fairly confusing, since it sounds like “Bayes’ rule” aka Bayes’ theorem.)
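To make the finite setup and these definitions concrete, here is a minimal Python sketch; the particular worlds, observations, losses, and prior are invented for illustration:

```python
import itertools

worlds = ["w0", "w1"]                      # Theta: possible world-states
observations = ["obs_lo", "obs_hi"]        # O: possible observations
actions = ["a1", "a2", "a3"]               # A: available actions

# Likelihood f(o | w): probability of each observation in each world.
likelihood = {
    ("obs_lo", "w0"): 0.8, ("obs_hi", "w0"): 0.2,
    ("obs_lo", "w1"): 0.3, ("obs_hi", "w1"): 0.7,
}

# Loss L(w, a): lower is better.
loss = {
    ("w0", "a1"): 1.0, ("w0", "a2"): 0.0, ("w0", "a3"): 2.0,
    ("w1", "a1"): 1.0, ("w1", "a2"): 3.0, ("w1", "a3"): 0.0,
}

def risk(rule, w):
    """R(w, rule): expected loss over observations, given true world w."""
    return sum(likelihood[(o, w)] * loss[(w, rule[o])] for o in observations)

def bayes_risk(rule, prior):
    """r(prior, rule): expected risk over worlds."""
    return sum(prior[w] * risk(rule, w) for w in worlds)

# Enumerate all pure decision rules (maps from observation to action) and pick
# one that is Bayes-optimal with respect to a non-dogmatic prior.
prior = {"w0": 0.4, "w1": 0.6}
rules = [dict(zip(observations, choice))
         for choice in itertools.product(actions, repeat=len(observations))]
best = min(rules, key=lambda rule: bayes_risk(rule, prior))
print(best, bayes_risk(best, prior))
```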

THEOREM: When $\Theta$ and $A$ are finite, decision rules which are bayes-optimal with respect to a non-dogmatic $\pi$ are admissible.

PROOF: If $\delta$ is Bayes-optimal with respect to non-dogmatic $\pi$, it minimizes the expectation $\sum_{\theta} \pi(\theta)\, R(\theta, \delta)$. Since $\pi(\theta) > 0$ for each world, any pareto improvement over $\delta$ (which must be strictly better in some world, and not worse in any) would strictly decrease this expectation, contradicting the assumption that $\delta$ minimizes it. So no pareto improvement over $\delta$ exists, and $\delta$ is admissible.

THEOREM: (basic CCT) When $\Theta$ and $A$ are finite, a decision rule is admissible if and only if it is Bayes-optimal with respect to some prior $\pi$.

PROOF: If $\delta$ is admissible, we wish to show that it is Bayes-optimal with respect to some $\pi$.

A decision rule $\delta$ has a risk in each world; think of this as a vector in $\mathbb{R}^{|\Theta|}$. The set $\mathcal{R}$ of achievable risk vectors (given by all $\delta \in D$) is convex, since we can make mixed strategies between any two decision rules. It is also closed, since $\Theta$ and $A$ are finite. Consider a risk vector $r$ as a point in this space (not necessarily achievable by any $\delta$). Define the lower quadrant $Q(r)$ to be the set of points which would be pareto improvements over $r$ if they were achievable by a decision rule. Note that for an admissible decision rule $\delta$ with risk vector $r$, $Q(r)$ and $\mathcal{R}$ are disjoint. By the hyperplane separation theorem, there is a separating hyperplane between $Q(r)$ and $\mathcal{R}$. We can define $\pi$ by taking a vector normal to the hyperplane and normalizing it to sum to one. This is a prior for which $\delta$ is Bayes-optimal, establishing the desired result.

If this is confusing, I again suggest Peter Hoff’s course notes. However, here is a simplified illustration of the idea for two worlds, four pure actions, and no observations:

(The plot uses $-L$, because I am more comfortable with thinking of “good” as “up”, i.e., thinking in terms of utility rather than loss.)

The black “corners” coming from two of the points show the beginning of the $Q(r)$ set for those points. (You can imagine the corresponding corners for the other two.) Call the four pure actions $a_1$ through $a_4$. Nothing is pareto-dominated except for $a_4$, which is dominated by everything. In economics terminology, the first three actions are on the pareto frontier. In particular, $a_1$ is not pareto-dominated. Putting some numbers to it, $a_1$ could be worth $(2,2)$, that is, worth two in each world. $a_2$ could be worth $(1,10)$, and $a_3$ could be worth $(10,1)$. There is no prior over the two worlds in which a Bayesian would want to take action $a_1$. So, how do we rule it out through our admissibility requirement? We add mixed strategies:

Now, there’s a new pareto frontier: the line stretching between $a_2$ and $a_3$, consisting of strategies which have some probability of taking those two actions. Everything else is pareto-dominated. An agent who starts out considering $a_1$ can see that mixing between $a_2$ and $a_3$ is just a better idea, no matter what world they’re in. This is the essence of the CCT argument.
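A quick numerical check of that claim, using the example utilities above (a sketch; higher numbers are better since these are utilities rather than losses):

```python
# The numbers from the example: utilities of the pure actions in the two
# worlds, and a 50/50 mixture of a2 and a3.
a1, a2, a3 = (2, 2), (1, 10), (10, 1)

mix = tuple(0.5 * x + 0.5 * y for x, y in zip(a2, a3))          # (5.5, 5.5)
dominates_a1 = (all(m >= u for m, u in zip(mix, a1))
                and any(m > u for m, u in zip(mix, a1)))
print(mix, dominates_a1)   # (5.5, 5.5) True: the mixture pareto-dominates a1
```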

Once we move to the pareto frontier of the set of mixed strategies, we can draw the separating hyperplanes mentioned in the proof:

(There may be a unique line, or several separating lines.) The separating hyperplane allows us to derive a (non-dogmatic) prior which the chosen decision rule is consistent with.
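As a rough illustration of reading such a prior off in the running example, here is a brute-force scan standing in for the hyperplane construction (same invented numbers as above):

```python
# Scan priors p = P(world 1) and keep those under which the chosen mixed
# strategy is Bayes-optimal (these are utilities, so "optimal" means maximal).
pure = {"a1": (2, 2), "a2": (1, 10), "a3": (10, 1)}
chosen = (5.5, 5.5)                     # the 50/50 mix of a2 and a3

def expected_utility(point, p):
    return p * point[0] + (1 - p) * point[1]

supporting = [p / 100 for p in range(1, 100)
              if expected_utility(chosen, p / 100)
                 >= max(expected_utility(v, p / 100) for v in pure.values()) - 1e-9]
print(supporting)   # [0.5]: the normal to the a2-a3 edge, normalized to sum to 1
```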

Removing Likelihoods (and other unfortunate assumptions)

Assuming the existence of a likelihood function is rather strange, if our goal is to argue that agents should use probability and expected utility to make decisions. A purported decision-theoretic foundation should not assume that an agent has any probabilistic beliefs to start out.

Fortunately, this is an extremely easy modification of the argument: restricting the likelihood function $f(o \mid \theta)$ to be either zero or one is just a special case of the existing theorem. This does not limit our expressive power. Previously, a world in which the true temperature is zero degrees would have some probability of emitting the observation “the temperature is one degree”, due to observation error. Now, we consider the error a part of the world: there is a world where the true temperature is zero and the measurement is one, as well as one where the true temperature is zero and the measurement is zero, and so on.
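Here is a small sketch of that move, with a hypothetical two-temperature example (the numbers are made up):

```python
# Illustrative sketch: fold observation error into the world-state itself.
original_worlds = ["temp0", "temp1"]
measurements = ["meas0", "meas1"]
noisy_likelihood = {                       # P(measurement | world), made-up numbers
    ("meas0", "temp0"): 0.9, ("meas1", "temp0"): 0.1,
    ("meas0", "temp1"): 0.2, ("meas1", "temp1"): 0.8,
}

# New world-states: (true temperature, actual measurement) pairs.
sub_worlds = [(w, m) for w in original_worlds for m in measurements]

def deterministic_likelihood(observation, sub_world):
    """The new likelihood is 0 or 1: a sub-world emits exactly its measurement."""
    return 1.0 if observation == sub_world[1] else 0.0

# The old observation error is now ordinary uncertainty between sub-worlds: e.g.
# a prior over sub_worlds can give ("temp0", "meas1") weight P(temp0) * 0.1.
```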

Another related concern is the assumption that we have mixed strategies, which are described via probabilities. Unfortunately, this is much more central to the argument, so we have to do a lot more work to re-state things in a way which doesn’t assume probabilities directly. Bear with me; it’ll be a few paragraphs before we’ve done enough work to eliminate the assumption that mixed strategies are described by probabilities.

It will be easier to first get rid of the assumption that we have a cardinal-valued loss $L$. Instead, assume that we have an ordinal preference ordering over actions for each world $\theta$. We then apply the VNM theorem within each world, to get a cardinal-valued utility within each world. The CCT argument can then proceed as usual.

Applying VNM is a little unsatisfying, since we need to assume the VNM axioms about our preferences. Happily, it is easy to weaken the VNM axioms, instead letting the assumptions from the CCT setting do more work. A detailed write-up of the following is being worked on, but to briefly sketch:

First, we can get rid of the independence axiom. A mixed strategy is really a strategy which involves observing coin-flips. We can put the coin-flips inside the world (breaking each world into more sub-worlds in which the coin-flips come out differently). When we do this, the independence axiom is a consequence of admissibility; any violation of independence can be undone by a pareto improvement.

Second, having made coin-flips explicit, we can get rid of the axiom of continuity. We apply the VNM-like theorem from the paper Additive representation of separable preferences over infinite products, by Marcus Pivato. This gives us cardinal-valued utility functions, but without the continuity axiom, our utility may sometimes be represented by infinities. (Specifically, we can consider surreal-numbered utility as the most general case.) You can assume this never happens if it bothers you.

More importantly, at this point we don’t need to assume that mixed strategies are represented via pre-existing probabilities anymore. Instead, they’re represented by the coins.

I’m fairly happy with this result, and apologize for the brief treatment. However, let’s move on for now to the comparison to social choice theory I promised.

Utilitarianism

I said that the elements of $\Theta$ are “possible world-states” and that there is an “agent” who is “uncertain about which world-state is the case”; however, notice that I didn’t really use any of that in the theorem. What matters is that for each $\theta$, there is a preference relation on actions. CCT is actually about compromising between different preference relations.

If we drop the observations, we can interpret the elements of $\Theta$ as people, and the elements of $A$ as potential collective actions. The decision rules $\delta \in D$ are potential social choices, which are admissible when they are pareto-efficient with respect to individuals’ preferences.

Making the hyperplane argument as before, we get a $\pi$ which places positive weight on each individual. This is interpreted as each individual’s weight in the coalition. The collective decision must be the result of a (positive) linear combination of each individual’s cardinal utilities, and those cardinal utilities can in turn be constructed via an application of VNM to individual ordinal preferences. This result is very similar to Harsanyi’s utilitarianism theorem.
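A minimal sketch of what the resulting collective choice looks like (the people, weights, and utilities here are invented for illustration):

```python
# Collective choice as a positively weighted sum of individual cardinal utilities.
actions = ["fund_parks", "fund_roads", "do_nothing"]
utilities = {                                   # made-up individual utilities
    "alice": {"fund_parks": 3.0, "fund_roads": 1.0, "do_nothing": 0.0},
    "bob":   {"fund_parks": 0.0, "fund_roads": 2.0, "do_nothing": 1.0},
}
weights = {"alice": 0.6, "bob": 0.4}            # positive weight on every individual

def social_utility(action):
    return sum(weights[person] * utilities[person][action] for person in weights)

print(max(actions, key=social_utility))         # "fund_parks" under these weights
```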

This is not only a nice argument for utilitarianism, it is also an amusing mathematical pun, since it puts utilitarian “social utility” and decision-theoretic “expected utility” into the same mathematical framework. Just because both can be derived via pareto-optimality arguments doesn’t mean they’re necessarily the same thing, though.

Harsanyi’s theorem is not the most-cited justification for utilitarianism. One reason for this may be that it is “overly pragmatic”: utilitarianism is about values; Harsanyi’s theorem is about coherent governance. Harsanyi’s theorem relies on imagining a collective decision which has to compromise between everyone’s values, and specifies what it must be like. Utilitarians don’t imagine such a global decision can really be made; rather, they are trying to specify their own altruistic values. Nonetheless, a similar argument applies: altruistic values are enough of a “global decision” that, hypothetically, you’d want to run the Harsanyi argument if you had descriptions of everyone’s utility functions and if you accepted pareto improvements. So there’s an argument to be made that that’s still what you want to approximate.

Another reason, mentioned by Jessicata in the comments, is that utilitarians typically value egalitarianism. Harsanyi’s theorem only says that you must put some weight on each individual, not that you have to be fair. I don’t think this is much of a problem: just as CCT argues for “some” prior, but realistic agents have further considerations which make them skew towards maximally spread-out priors, CCT in social choice theory can tell us that we need some weights, and there can be extra considerations which push us toward egalitarian weights. Harsanyi’s theorem is still a strong argument for a big chunk of the utilitarian position.

Futarchy

Now, as promised, Critch’s ‘futarchy’ theorem.

If we add observations back in to the multi-agent interpretation, the likelihood function associates each agent with a probability distribution on observations. This can be interpreted as each agent’s beliefs. In the paper Toward Negotiable Reinforcement Learning, Critch examined pareto-optimal sequential decision rules in this setting. Not only is there a function which gives a weight for each agent in the coalition, but these weights are updated via Bayes’ Rule as observations come in. The interpretation of this is that the agents in the coalition want to bet on their differing beliefs, so that agents who make more correct bets gain more influence over the decisions of the coalition.
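A rough sketch of that dynamic (my gloss on the mechanism, not Critch’s notation; the agents and numbers are made up): each agent’s weight acts like an unnormalized prior over agents and is conditioned on each observation.

```python
# Sketch of the weight dynamics: each agent's coalition weight is multiplied by
# the probability it assigned to what was actually observed, then renormalized.
weights = {"agent_A": 0.5, "agent_B": 0.5}
beliefs = {                                     # made-up predictive distributions
    "agent_A": {"rain": 0.8, "sun": 0.2},
    "agent_B": {"rain": 0.3, "sun": 0.7},
}

def update(weights, observation):
    unnormalized = {a: w * beliefs[a][observation] for a, w in weights.items()}
    total = sum(unnormalized.values())
    return {a: v / total for a, v in unnormalized.items()}

weights = update(weights, "rain")
print(weights)   # agent_A predicted better and gains influence: ~0.73 vs ~0.27
```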

This differs from Robin Hanson’s futarchy, whose motto “vote on values, but bet beliefs” suggests that everyone gets an equal vote: you lose money when you bet, which loses you influence on implementation of public policy, but you still get an equal share of value. However, Critch’s analysis shows that Robin’s version can be strictly improved upon, resulting in Critch’s version. (Also, Critch is not proposing his solution as a system of governance, only as a notion of multi-objective learning.) Nonetheless, the spirit still seems similar to Futarchy, in that the control of the system is distributed based on bets.

If Critch’s system seems harsh, it is because we wouldn’t really want to bet away all our share of the collective value, nor do we want to punish those who would bet away all their value too severely. This suggests that we (a) just wouldn’t bet everything away, and so wouldn’t end up too badly off; and (b) would want to still take care of those who bet their own value away, so that the consequences for those people would not actually be so harsh. Nonetheless, we can also try to take the problem more seriously and think about alternative formulations which seem less strikingly harsh.

Conclusion

One potential research program which may arise from this is: take the analogy between social choice theory and decision theory very seriously. Look closely at more complicated models of social choice theory, including voting theory and perhaps mechanism design. Understand the structure of rational collective choice in detail. Then, try to port the lessons from this back to the individual-agent case, to create decision theories more sophisticated than simple Bayes. Mirroring this on the four-quadrant diagram from early on:

And, if you squint at this diagram, you can see the letters “CCT”.

(Closing visual pun by Caspar Österheld.)