Applause Lights

At the Singularity Summit 2007, one of the speakers called for democratic, multinational development of artificial intelligence. So I stepped up to the microphone and asked:

Suppose that a group of democratic republics form a consortium to develop AI, and there’s a lot of politicking during the process—some interest groups have unusually large influence, others get shafted—in other words, the result looks just like the products of modern democracies. Alternatively, suppose a group of rebel nerds develops an AI in their basement, and instructs the AI to poll everyone in the world—dropping cellphones to anyone who doesn’t have them—and do whatever the majority says. Which of these do you think is more “democratic,” and would you feel safe with either?

I wanted to find out whether he believed in the pragmatic adequacy of the democratic political process, or if he believed in the moral rightness of voting. But the speaker replied:

The first scenario sounds like an editorial in Reason magazine, and the second sounds like a Hollywood movie plot.

Confused, I asked:

Then what kind of democratic process did you have in mind?

The speaker replied:

Something like the Human Genome Project—that was an internationally sponsored research project.

I asked:

How would different interest groups resolve their conflicts in a structure like the Human Genome Project?

And the speaker said:

I don’t know.

This exchange puts me in mind of a quote from some dictator or other, who was asked if he had any intentions to move his pet state toward democracy:

We believe we are already within a democratic system. Some factors are still missing, like the expression of the people’s will.

The substance of a democracy is the specific mechanism that resolves policy conflicts. If all groups had the same preferred policies, there would be no need for democracy—we would automatically cooperate. The resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an artificial intelligence, but it has to be something. What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind?

I think it means that you have said the word “democracy,” so the audience is supposed to cheer. It’s not so much a propositional statement or belief, as the equivalent of the “Applause” light that tells a studio audience when to clap.

This case is remarkable only in that I mistook the applause light for a policy suggestion, with subsequent embarrassment for all. Most applause lights are much more blatant, and can be detected by a simple reversal test. For example, suppose someone says:

We need to balance the risks and opportunities of AI.

If you reverse this statement, you get:

We shouldn’t balance the risks and opportunities of AI.

Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information.

There are plenty of legitimate reasons for uttering a sentence that would be uninformative in isolation. “We need to balance the risks and opportunities of AI” can introduce a discussion topic; it can emphasize the importance of a specific proposal for balancing; it can criticize an unbalanced proposal. Linking to a normal assertion can convey new information to a bounded rationalist—the link itself may not be obvious. But if no specifics follow, the sentence is probably an applause light.

I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:

I am here to propose to you today that we need to balance the risks and opportunities of advanced artificial intelligence. We should avoid the risks and, insofar as it is possible, realize the opportunities. We should not needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm. We should respect the interests of all parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies accrue to as many individuals as possible, rather than being restricted to a few. We must try to avoid, as much as possible, violent conflicts using these technologies; and we must prevent massive destructive capability from falling into the hands of individuals. We should think through these issues before, not after, it is too late to do anything about them . . .