The Unique Games Conjecture and FAI: A Troubling Obstacle

I am not a computer scientist and do not know much about complexity theory. However, it’s a field that interests me, so I occasionally browse some articles on the subject. I was brought to https://www.simonsfoundation.org/mathematics-and-physical-science/approximately-hard-the-unique-games-conjecture/ by a link on Scott Aaronson’s blog, and read the article to reacquaint myself with the Unique Games Conjecture, which I had partially forgotten about. If you are not familiar with the UGC, that article will explain it to you better than I can.

One phrase in the article stuck out to me: “there is some number of colors k for which it is NP-hard (that is, effectively impossible) to distinguish between networks in which it is possible to satisfy at least 99% of the constraints and networks in which it is possible to satisfy at most 1% of the constraints”. I think this sentence is concerning for those interested in the possibility of creating FAI.
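To make the coloring problem from the article concrete, here is a small sketch (my own illustration, not from the article) of a Unique Games instance: each edge of a network carries a permutation of the k colors, and an edge’s constraint is satisfied when the color of one endpoint maps to the color of the other. Brute force over all colorings finds the best satisfiable fraction, but takes time exponential in the number of nodes; the conjecture concerns the impossibility of even approximating this fraction efficiently at scale.

```python
# Toy Unique Games instance. Each edge (u, v, pi) carries a permutation
# pi of the k colors; a coloring satisfies the edge iff
# coloring[v] == pi[coloring[u]].
from itertools import product

def best_fraction(n_nodes, k, edges):
    """Max fraction of edge constraints satisfiable by any k-coloring.

    edges: list of (u, v, pi), where pi is a tuple sending each
    possible color of u to the required color of v.
    Brute force: tries all k**n_nodes colorings.
    """
    best = 0.0
    for coloring in product(range(k), repeat=n_nodes):
        satisfied = sum(1 for u, v, pi in edges
                        if coloring[v] == pi[coloring[u]])
        best = max(best, satisfied / len(edges))
    return best

# A 3-node cycle with k = 2 colors: identity constraints on two edges
# and a "swap" constraint on the third. No coloring satisfies all
# three, so the best achievable fraction is 2/3.
identity, swap = (0, 1), (1, 0)
edges = [(0, 1, identity), (1, 2, identity), (2, 0, swap)]
print(best_fraction(3, 2, edges))  # 0.666...
```

The small cycle shows why partial satisfaction is the natural measure: the instance is unsatisfiable as a whole, yet two thirds of its constraints can always be met.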

It is impossible to perfectly satisfy human values: matter and energy are limited, and so, therefore, are the capabilities of even an enormously powerful AI. Thus, in trying to maximize human happiness, we are dealing with a problem closely analogous to the UGC’s coloring problem: a large set of constraints, not all of which can be satisfied at once. Additionally, our values themselves are ill-formed. Human values are numerous, ambiguous, even contradictory. Given the complexities of human value systems, I think it’s safe to say we’re dealing with a particularly nasty variation of the problem, worse than what the computer scientists studying it have dealt with.

Not all specific instances of complex optimization problems fall under the UGC’s hardness results, of course, so this does not in itself mean that building an FAI is impossible. Also, even if maximizing the satisfaction of human values is NP-hard (or maximizing the probability of satisfying human values, or maximizing the probability of maximizing that probability), we can still assess a machine’s code and actions heuristically. However, even the best heuristics are limited, as the UGC itself demonstrates: at bottom, all heuristics must rely on inflexible assumptions of some sort.
