What if people simply forecasted your future choices?

tldr: If you could have a team of smart forecasters predicting your future decisions & actions, they would likely improve them in accordance with your epistemology. This is a very broad method that’s less ideal than more reductionist approaches for specific things, but possibly simpler to implement and likelier to be accepted by decision makers with complex motivations.

Background

The standard way of finding questions to forecast involves a lot of work. As Zvi noted, questions should be very well-defined, and coming up with interesting yet specific questions takes considerable effort.

One overarching question is how predictions can be used to drive decision making. One recommendation (one version of which is called “Decision Markets”) often comes down to estimating future parameters conditional on each of a set of choices. Another option is to have expert evaluators probabilistically evaluate each option, and have predictors predict their evaluations (Prediction-Augmented Evaluations).
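A minimal sketch of the decision-market idea, assuming hypothetical options, made-up forecast numbers, and a simple mean as the aggregation rule:

```python
# Minimal sketch of a decision-market-style choice (all numbers hypothetical).
# For each option, forecasters estimate some future metric *conditional on*
# that option being chosen; the decision maker takes the option with the
# best aggregated conditional estimate.

from statistics import mean

# Hypothetical conditional forecasts, e.g. "revenue in a year if we pick this option".
conditional_forecasts = {
    "option_a": [3.1, 2.8, 3.4],
    "option_b": [2.2, 2.5, 2.0],
}

aggregated = {option: mean(estimates) for option, estimates in conditional_forecasts.items()}
best_option = max(aggregated, key=aggregated.get)
print(aggregated)   # option_a averages ~3.1, option_b ~2.2
print(best_option)  # 'option_a'
```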

Proposal

One prediction proposal I suggest is to have predictors simply predict the future actions & decisions of agents. I’ll tentatively call this an “action prediction system.” The evaluation process (the choosing process) would need to happen anyway, and the questions become very simple to define. This may seem too basic to be useful, but I think it may be a lot better than I initially expected.

Say I’m trying to decide what laptop I should purchase. I could have some predictors predicting which one I’ll decide on. In the beginning, the prediction aggregation shows that I have a 90% chance of choosing one option. While I really would like to be the kind of person who purchases a Lenovo with Linux, I’ll probably wind up buying another MacBook. The predictors may realize that I typically check Amazon reviews and the Wirecutter for research, and they have a decent idea of what I’ll find when I eventually do.

It’s not clear to me how best to focus predictors on specific uncertain actions I may take. It seems like I would want to ask them mostly about the decisions I am most uncertain of.
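One simple heuristic, sketched below, would be to rank upcoming decisions by how uncertain the aggregated forecast is (for instance, by its entropy) and surface the most uncertain ones to predictors first. The decision names and probabilities here are made up for illustration:

```python
# Minimal sketch: rank decisions by the entropy of the aggregated action
# forecast, so predictor attention goes to the decisions that are most
# uncertain. All decision names and probabilities are hypothetical.

import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Aggregated probability that the agent ends up choosing each option.
forecasts = {
    "laptop":    {"macbook": 0.90, "lenovo_linux": 0.10},
    "job_offer": {"accept": 0.55, "decline": 0.45},
}

ranked = sorted(forecasts, key=lambda d: entropy(forecasts[d].values()), reverse=True)
print(ranked)  # ['job_offer', 'laptop'] -> ask about the job offer first
```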

One important aspect is that I should have a line of communication to the predictors. This means that some clever ones may eventually catch on to practices such as the following:

A forecaster-sales strategy

  1. Find good decision options that have been overlooked.

  2. Make forecasts or bets on them succeeding.

  3. Provide really good arguments and research as to why they have been overlooked.

If I, the laptop purchaser, am skeptical, I could ignore the prediction feedback. But if I repeat the process for other decisions, I should eventually develop a sense of trust in the accuracy of the aggregation, and then in the predictors’ ability to understand my desires. I may also be very interested in what that community has to say, as they will have developed a model of what my preferences are. If I’m generally a reasonable and intelligent person, I could learn how to best rely on these predictors to speed up and improve my future decisions.

In a way, this solution doesn’t solve the problem of “how to decide the best option”; it just moves it into what may be a more manageable place. Over time I imagine that new strategies may emerge for what generally constitutes “good arguments”, and those will be adopted. In the meantime, agents will be encouraged to quickly choose options they would generally want, using reasoning techniques they generally prefer. If one agent were really convinced by a decision market, then perhaps some forecasters would set one up in order to prove their point.

Failure Modes

There are a few obvious failure modes to such a setup. I think it could dilute signal quality, but I am not as worried about some of the other obvious ones.

Weak Signals

I think it’s fair to say that if one wanted to optimize for expected value, asking forecasters to predict actions instead could lead to weaker signals. Forecasters would be estimating a few things at once (how good an option is, and how likely the agent is to choose it). If the agent isn’t really intent on optimizing for specific things, and even if they are, it may be difficult for the probabilities over their decisions to carry enough signal to be useful. I think this would have to be empirically tested under different conditions.
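As a toy illustration with made-up numbers: the ranking by forecasted choice probability can diverge from the ranking by option value, which is the sense in which the signal gets diluted:

```python
# Toy illustration (all numbers made up): an action forecast mixes together
# "how good each option is" and "how likely the agent is to pick it", so the
# most probable option need not be the most valuable one.

option_value = {"macbook": 0.4, "lenovo_linux": 0.7}        # what an EV-optimizing view cares about
choice_probability = {"macbook": 0.9, "lenovo_linux": 0.1}  # what action predictors forecast

print(max(option_value, key=option_value.get))              # 'lenovo_linux'
print(max(choice_probability, key=choice_probability.get))  # 'macbook'
```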

There could also be complex feedback loops, especially for naive agents. An agent may trust its predictors too much. If the predictors believe the agent is too trusting or trusts the wrong signals, they could amplify those signals and find “easy stable points.” I’m really unsure of how this would play out, or how much competence the agent or predictors would need for the outcomes to be net-beneficial. I’d be interested in testing and paying attention to this failure mode.

That said, the reference class of groups who would consider and be interested in paying for “action predictions” vs. “decision markets” or similar is a very small one, and one that I expect would be convinced only by pretty good arguments. So pragmatically, in the rare cases where the question “would our organization be wise enough to benefit from action predictions?” is asked, I’d expect the answer to lean positive. I wouldn’t expect obviously sleazy sales strategies to convince GiveWell of a new top cause area, for example.

Inevitable Failures

Say the predictors realized that a MacBook wouldn’t make any sense for me, but that I was still 90% likely to choose it, even after I heard all of the best arguments. That would be somewhat of an “inevitable failure.” The amount of utility I get from each item could be largely uncorrelated with my chances of choosing that item, even after I hear about that mismatch.

While this may be unfortunate, it’s not obvious what would work in these conditions. The goal of predictions shouldn’t be to predict the future accurately, but instead to help agents make better decisions. If there were a different system that did a great job outlining the negative effects of a bad decision on my life, but I predictably ignored that system, then it just wouldn’t be useful, despite being accurate. The value of information would be low. It’s really tough for an information system to be so good that it is useful even when ignored.
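The value-of-information point can be put as a line of arithmetic, with assumed utilities: if the choice is the same with and without the forecast, the forecast adds nothing, however accurate it is.

```python
# Back-of-the-envelope value-of-information calculation (utilities assumed).
# If the agent's choice is the same with or without the forecast, the
# forecast's value of information is zero, regardless of its accuracy.

utility = {"macbook": 2.0, "lenovo_linux": 5.0}  # hypothetical utilities

choice_without_info = "macbook"  # what the agent would do anyway
choice_with_info = "macbook"     # the agent predictably ignores the warning

value_of_information = utility[choice_with_info] - utility[choice_without_info]
print(value_of_information)  # 0.0 -> accurate but ignored information adds no value
```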

I’d also argue that the kinds of agents that would make predictably poor decisions would be ones that really aren’t interested in getting accurate and honest information. It could seem pretty brutal to them; basically, it would involve them paying for a system that continuously tells them that they are making mistakes.

The previous discussion has assumed that the agents making the decisions are the same ones paying for the forecasting. This is not always the case, but where it isn’t, setting up the other proposals could easily be seen as hostile. If I set up a system to start evaluating the expected total value of all the actions of my friend George, knowing that George would systematically ignore the main ones, I could imagine George might not be very happy with his subsidized evaluations.

Principal-agent Problems

I think “action predictions” would help agents fulfill their actual goals, while other forecasting systems would do more to help them fulfill their stated goals. This has obvious costs and benefits.

Let’s consider a situation with a CEO who wants their company to be as big as possible, and shareholders who instead want the company to be as profitable as possible.

Say the CEO commits to “maximizing shareholder value,” and to making decisions that do so. If there were a decision market set up to estimate how much shareholder value each of a set of options would produce (as opposed to a decision prediction system), and that information were public to shareholders, then it would be obvious to them when and how often the CEO disobeys that advice. This would be a very transparent setup that would allow the shareholders to police the CEO. It would take away a lot of the CEO’s flexibility and authority and place it in the hands of the decision system.

Conversely, say the CEO instead shares a transparent action prediction system. Predictor participants would, in this case, try to understand the specific motivations of the CEO and optimize their arguments for them. Even if their activity were being policed by shareholders, they would know this and could disguise their arguments accordingly. If discussing and correctly predicting the net impact on shareholders would hurt their ability to predict the CEO’s actions and persuade them, they could simply ignore it, or better yet find convincing arguments against taking that action. I expect that an action prediction system would essentially act to amplify the abilities of the decider, even at the cost of other interested third parties.

Salesperson Melees

One argument against this is a gut reaction that it sounds very “salesy,” so it probably won’t work. While I agree there are some cases where it may not work too well (discussed above in the Weak Signals section), I think that smart people should be positively augmented by good salesmanship under reasonable incentives.

In many circumstances, salespeople really are useful. The industry is huge, and I’m under the impression that at least a significant fraction (>10%) of it is net-beneficial. Specific kinds of technical and corporate sales come to mind, where the “sales” professionals are some of the most useful people to discuss technical questions with. There simply aren’t other services willing to have lengthy discussions about some topics.

Externalities

Predictions used in this way would advance the goals of the agents using them, but these agents may be self-interested, leading to additional negative externalities for others. I don’t think this prediction process does anything to make people more altruistic. It would simply help agents better satisfy their own preferences. This is a common aspect of almost all intelligence-amplification proposals. I think it’s important to consider, but I’m really recommending this proposal more as a “possible powerful tool,” and not as a “tool that is expected to be highly globally beneficial if used.” That would be a very separate discussion.