Applause Lights

At the Singularity Summit 2007, one of the speakers called for democratic, multinational development of AI. So I stepped up to the microphone and asked:

Suppose that a group of democratic republics form a consortium to develop AI, and there’s a lot of politicking during the process—some interest groups have unusually large influence, others get shafted—in other words, the result looks just like the products of modern democracies. Alternatively, suppose a group of rebel nerds develops an AI in their basement, and instructs the AI to poll everyone in the world—dropping cellphones to anyone who doesn’t have them—and do whatever the majority says. Which of these do you think is more “democratic”, and would you feel safe with either?

I wanted to find out whether he believed in the pragmatic adequacy of the democratic political process, or if he believed in the moral rightness of voting. But the speaker replied:

The first scenario sounds like an editorial in Reason magazine, and the second sounds like a Hollywood movie plot.

Confused, I asked:

Then what kind of democratic process did you have in mind?

The speaker replied:

Something like the Human Genome Project—that was an internationally sponsored research project.

I asked:

How would different interest groups resolve their conflicts in a structure like the Human Genome Project?

And the speaker said:

I don’t know.

This exchange puts me in mind of a quote (which I failed to Google, but which was tracked down by Jeff Grey and Miguel) from some dictator or other, who was asked if he had any intentions to move his pet state toward democracy:

We believe we are already within a democratic system. Some factors are still missing, like the expression of the people’s will.

The substance of a democracy is the specific mechanism that resolves policy conflicts. If all groups had the same preferred policies, there would be no need for democracy—we would automatically cooperate. The resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an AI, but it has to be something. What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind?

I think it means that you have said the word “democracy”, so the audience is supposed to cheer. It’s not so much a propositional statement as the equivalent of the “Applause” light that tells a studio audience when to clap.

This case is remarkable only in that I mistook the applause light for a policy suggestion, with subsequent embarrassment for all. Most applause lights are much more blatant, and can be detected by a simple reversal test. For example, suppose someone says:

We need to balance the risks and opportunities of AI.

If you reverse this statement, you get:

We shouldn’t balance the risks and opportunities of AI.

Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information. There are plenty of legitimate reasons for uttering a sentence that would be uninformative in isolation. “We need to balance the risks and opportunities of AI” can introduce a discussion topic; it can emphasize the importance of a specific proposal for balancing; it can criticize an unbalanced proposal. Linking to a normal assertion can convey new information to a bounded rationalist—the link itself may not be obvious. But if no specifics follow, the sentence is probably an applause light.

I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:

I am here to propose to you today that we need to balance the risks and opportunities of advanced Artificial Intelligence. We should avoid the risks and, insofar as it is possible, realize the opportunities. We should not needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm. We should respect the interests of all parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies accrue to as many individuals as possible, rather than being restricted to a few. We must try to avoid, as much as possible, violent conflicts using these technologies; and we must prevent massive destructive capability from falling into the hands of individuals. We should think through these issues before, not after, it is too late to do anything about them...