Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book

From a review of Greg Egan’s new book, Zendegi:

Egan has always had difficulty in portraying characters whose views he disagrees with. They always end up seeming like puppets or strawmen, pure mouthpieces for a viewpoint. And this causes trouble in another strand of Zendegi, which is a mildly satirical look at transhumanism. Now you can satirize by nastiness, or by mockery, but Egan is too nice for the former, and not accurate enough at mimicry for the latter. It ends up being a bit feeble, and the targets are not likely to be much hurt.

Who are the targets of Egan’s satire? Well, here’s one of them, appealing to Nasim to upload him:

“I’m Nate Caplan.” He offered her his hand, and she shook it. In response to her sustained look of puzzlement he added, “My IQ is one hundred and sixty. I’m in perfect physical and mental health. And I can pay you half a million dollars right now, any way you want it. [...] when you’ve got the bugs ironed out, I want to be the first. When you start recording full synaptic details and scanning whole brains in high resolution—” [...] “You can always reach me through my blog,” he panted. “Overpowering Falsehood dot com, the number one site for rational thinking about the future—”

(We’re supposed, I think, to contrast Caplan’s goal of personal survival with Martin’s goal of bringing up his son.)

“Overpowering Falsehood dot com” is transparently Overcoming Bias, a blog set up by Robin Hanson of the Future of Humanity Institute and Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence. Which is ironic, because Yudkowsky is Egan’s biggest fan: “Permutation City [...] is simply the best science-fiction book ever written”, and his thoughts on transhumanism were strongly influenced by Egan: “Diaspora [...] affected my entire train of thought about the Singularity.”

Another transhumanist group is the “Benign Superintelligence Bootstrap Project”—the name references Yudkowsky’s idea of “Friendly AI” and the description references Yudkowsky’s argument that recursive self-optimization could rapidly propel an AI to superintelligence. From Zendegi:

“Their aim is to build an artificial intelligence capable of such exquisite powers of self-analysis that it will design and construct its own successor, which will be armed with superior versions of all the skills the original possessed. The successor will produce a still more proficient third version, and so on, leading to a cascade of exponentially increasing abilities. Once this process is set in motion, within weeks—perhaps within hours—a being of truly God-like powers will emerge.”

Egan portrays the Bootstrap Project as a (possibly self-deluding, it’s not clear) confidence trick. The Project persuades a billionaire to donate his fortune to them in the hope that the “being of truly God-like powers” will grant him immortality come the Singularity. He dies disappointed and the Project “turn[s] five billion dollars into nothing but padded salaries and empty verbiage”.

(Original pointer via Kobayashi; Risto Saarelma found the review. I thought this was worthy of a separate thread.)