Singularity the hard way

So far, we have only one known example of the development of intelligent life, and that example is us. Humanity. That means we have only one mechanism that is known to be able to produce intelligent life, and that is evolution. But by far the majority of life produced by evolution is not intelligent. (In fact, by far the majority of life produced by evolution appears to be bacteria, as far as I can tell. There are also a lot of beetles.)

Why did evolution produce such a steep climb in human intelligence, but not so much in other creatures? That, I suspect, is at least partially because we humans are no longer competing against other creatures. We are competing against each other.

Also, once we managed to start writing things down and sharing knowledge, we shifted off the slow, evolutionary timescale and onto the faster, technological timescale. As technology improves, we find ourselves being more right, less wrong; our ability to affect the environment continually increases. Our intellectual development, as a species, speeds up dramatically.

And I believe that there is a hack that can be applied to this process: a mechanism by which the total intelligence of humanity as a whole can be rather dramatically increased. (It will take time.) The process is simple enough in concept.


These thoughts were triggered by an article on some Ethiopian children who were given tablets by OLPC. They were chosen specifically on the basis of illiteracy (throughout the whole village) and were given no teaching, aside from the teaching apps on the tablets (some instruction on how to use the solar chargers was also given to the adults). In fairly short order, they taught themselves basic literacy, and had modified the operating system to customise it and re-enable the camera.

My first thought was that this gives an upper bound on the minimum cost of world literacy: at most the cost of one tablet per child, plus a bit for transportation.
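As a back-of-the-envelope check, that bound is simple to compute. The sketch below is purely illustrative: the child count, tablet price, and shipping cost are assumptions I am plugging in for the sake of the example, not figures from the article.

```python
# Illustrative upper bound on the cost of world literacy, assuming
# "one tablet per child, plus transportation" suffices.

def literacy_cost_upper_bound(children, tablet_cost, transport_per_tablet):
    """Every illiterate child gets one tablet, shipped to them."""
    return children * (tablet_cost + transport_per_tablet)

# Hypothetical inputs: ~250 million illiterate children worldwide,
# a $100 tablet (the original OLPC price target), $10 shipping each.
bound = literacy_cost_upper_bound(250e6, 100, 10)
print(f"Upper bound: ${bound / 1e9:.1f} billion")  # ~$27.5 billion
```

Whatever the true inputs, the point stands: the bound scales linearly with the number of children, and falls as tablet prices do.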


In short, we need world literacy. World literacy will allow anyone and everyone to read up on that which interests them. It will allow a vastly larger number of people to start thinking about certain hard problems (such as any hard problem you care to name). It will allow more eyes to look at science; more experiments to be done and published; more armour-piercing questions which no-one has yet thought to ask, because there simply are not enough scientists to ask them.

World literacy would improve the technological progress of humanity, and would probably, after enough generations, result in a humanity that we would, by today's standards, consider superhumanly intelligent. (This may or may not necessitate direct brain-computer interfaces.)

The aim, therefore, is to allow humanity, and not some human-made AI, to go *foom*. It will take a significant amount of time (following this plan means that our generation will do no more than continue a process that began some millions of years ago), but it has this advantage: if it is humanity that goes *foom*, then the resulting superintelligences are practically guaranteed to be human-Friendly, since they will be human. (For the moment, I discard the possibility of a suicidal superintelligence.)

It also has this advantage: the process is likely to be slow enough that a significant fraction of humanity will be enhanced at the same time, or close enough to the same time that none will be able to stop any of the others' enhancements. This drastically reduces the probability of being trapped by a single Unfriendly enhanced human.

The main disadvantage is the time taken; this will take centuries at the least, perhaps millennia. It is likely that, along the way, a more traditional AI will be created.