Singularity the hard way

So far, we have only one known example of the development of intelligent life; and that example is us. Humanity. That means we have only one mechanism that is known to be able to produce intelligent life; and that is evolution. But by far the majority of life produced by evolution is not intelligent. (In fact, by far the majority of life produced by evolution appears to be bacteria, as far as I can tell. There are also a lot of beetles.)

Why did evolution produce such a steep climb in human intelligence, while not so much in the case of other creatures? That, I suspect, is at least partially because as humans we are not competing against other creatures anymore. We are competing against each other.

Also, once we managed to start writing things down and sharing knowledge, we shifted off the slow, evolutionary timescale and onto the faster, technological timescale. As technology improves, we find ourselves being more right, less wrong; our ability to affect the environment continually increases. Our intellectual development, as a species, speeds up dramatically.

And I believe that there is a hack that can be applied to this process; a mechanism by which the total intelligence of humanity as a whole can be rather dramatically increased. (It will take time). The process is simple enough in concept.


These thoughts were triggered by an article on some Ethiopian children who were given tablets by OLPC. They were chosen specifically on the basis of illiteracy (throughout the whole village), and were given no teaching (aside from the teaching apps on the tablets; the adults were given some instruction on how to use the solar chargers). In fairly short order, they taught themselves basic literacy, and had even modified the operating system to customise it and re-enable the camera.

My first thought was that this gives an upper bound on the cost of world literacy: at most the cost of one tablet per child (plus a bit for transportation).


In short, we need world literacy. World literacy will allow anyone and everyone to read up on that which interests them. It will allow a vastly larger number of people to start thinking about certain hard problems (such as any hard problem you care to name). It will allow more eyes to look at science; more experiments to be done and published; more armour-piercing questions which no-one has yet thought to ask because there simply are not enough scientists to ask them.

World literacy would accelerate the technological progress of humanity; and probably, after enough generations, produce a humanity that we would, by today's standards, consider superhumanly intelligent. (This may or may not necessitate direct brain-computer interfaces.)

The aim, therefore, is to allow humanity, and not some human-made AI, to go *foom*. It will take some significant amount of time—following this plan means that our generation will do no more than continue a process that began some millions of years ago—but it does have this advantage: if it is humanity that goes *foom*, then the resulting superintelligences are practically guaranteed to be human-Friendly, since they will be human. (For the moment, I discard the possibility of a suicidal superintelligence.)

It also has this advantage: the process is likely to be slow enough that a significant fraction of humanity will be enhanced at the same time, or close enough to the same time that none will be able to stop any of the others' enhancements. This drastically reduces the probability of being trapped by a single Unfriendly enhanced human.

The main disadvantage is the time required; this will take centuries at the least, perhaps millennia. It is likely that, along the way, a more traditional AI will be created.