I know when the Singularity will occur

More precisely, if we suppose that sometime in the next 30 years, an artificial intelligence will begin bootstrapping its own code and explode into a super-intelligence, I can give you 2.3 bits of further information on when the Singularity will occur.

Between midnight and 5 AM, Pacific Standard Time.
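For the curious, that “2.3 bits” is just the information gained by narrowing a uniform prior over the time of day to a five-hour window; a minimal sketch of the arithmetic:

```python
import math

# Narrowing a uniform prior over the 24-hour day to a 5-hour window
# conveys log2(24/5) bits of information.
print(f"{math.log2(24 / 5):.2f} bits")  # 2.26 bits, i.e. roughly the 2.3 claimed above
```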

Why? Well, first, let’s just admit this: The race to win the Singularity is over, and Google has won. They have the world’s greatest computational capacity, the most expertise in massively distributed processing, the greatest collection of minds interested in and capable of working on AI, the largest store of online data, the largest store of personal data, and the largest library of scanned, computer-readable books. That includes textbooks. Like, all of them. All they have to do is subscribe to Springer-Verlag’s online journals, and they’ll have the entire collected knowledge of humanity in computer-readable format. They almost certainly have the biggest research budget for natural language processing with which to interpret all those things. They have two of the four smartest executives in Silicon Valley.1 Their corporate strategy for the past 15 years can be approximated as “Win the Singularity.”2 If someone gave you a billion dollars today to begin your attempt, you’d still be 15 years and about two hundred ninety-nine billion dollars behind Google. If you believe in a circa-2030 Singularity, there isn’t enough time left for anybody to catch up with them.

(And I’m okay with that, considering that the other contenders include Microsoft and the NSA. But it alarms me that Google hasn’t gone into bioinformatics or neuroscience. Apparently their plans don’t include humans.)

So the first bootstrapping AI will be created at Google. It will be designed to use Google’s massive distributed server system. And they will run it between midnight and 5 AM Pacific time, when the load on those servers is smallest.

A more important implication is that this scenario decreases the probability of a fast FOOM. The AI will be designed to run on the computational resources available to Google, and they’ll build and test it as soon as they think those resources are just barely enough for it to run. That means its minimum computational requirements will be within one or two orders of magnitude of the combined power of all the computers on Earth. (We don’t know how many servers Google has, but we know they installed their one millionth server on July 9, 2008. Google may—may—own less than 1% of the world’s CPU power, but connectivity within its system is vastly superior to that between other internet servers, let alone a botnet of random compromised PCs.)
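A back-of-envelope version of that order-of-magnitude claim, using the one-millionth-server figure from above and treating the other numbers as loose assumptions for illustration:

```python
# Rough order-of-magnitude sketch. Only the server count is from the post;
# the other figures are assumptions chosen for illustration.
google_servers = 1_000_000        # Google's one millionth server, installed July 2008
world_machines = 1_000_000_000    # assume ~1e9 internet-connected computers worldwide
server_vs_pc_factor = 10          # assume a Google server does ~10x the work of a typical PC

share = google_servers * server_vs_pc_factor / world_machines
print(f"Google's rough share of world CPU power: ~{share:.0%}")  # ~1%
# Within one or two orders of magnitude of "all the computers on Earth",
# which is the point being made above.
```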

So when the AI breaks out of the computational grid composed of all the Google data centers in the world, into the “vast wide world of the Internet”, it’s going to be very disappointed.

Of course, the distribution of computational power will change before then. Widespread teraflop GPUs could change this scenario completely in the next ten years.

In which case Google might take a sudden interest in GPUs...

ADDED:

Do I really believe all that? No. I do believe “Google wins” is a likely scenario—more likely than “X wins” for any other single value of X. Perhaps more importantly, you need to factor the scale at which the first AI is built into your FOOM-speed probability distribution, because if the first AI is built by a large, well-funded organization, that changes the FOOM paths open to it.

An AI FOOMs if it can improve its own intelligence in one way or another. The people who build the first AI will make its algorithms as efficient as they can. For the AI to make itself more intelligent by scaling, it has to get more resources; to make itself more intelligent by algorithm redesign, it has to be smarter than the smartest humans who work on AI. The former is trivial for an AI built in a basement, but severely limited for an AI brought to life at the direction of Page and Brin.

The first “human-level” AI will probably be roughly as smart as a human: people will try to build AIs before they can reach that level, so the distribution of effectiveness across attempted AIs will be concentrated well below human level, with many failures before the first success, and that first success will be only a marginal improvement over the last failure. That means the first AI will have about the same effective intelligence regardless of how it’s built.
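A toy simulation of that argument, with the progress model and all numbers being illustrative assumptions of mine rather than anything from the post: if attempts improve incrementally and noisily, the first attempt to cross the human-level threshold typically clears it by only a small margin.

```python
import random

random.seed(0)
THRESHOLD = 100.0  # arbitrary units standing in for "average human" effectiveness

def overshoot_of_first_success():
    """Run successive AI attempts until one crosses the threshold; return the margin."""
    underlying = 50.0
    while True:
        underlying += random.uniform(0.0, 2.0)         # incremental progress between attempts
        attempt = underlying + random.gauss(0.0, 5.0)  # noisy outcome of this attempt
        if attempt >= THRESHOLD:
            return attempt - THRESHOLD                 # how far the first success overshoots

margins = sorted(overshoot_of_first_success() for _ in range(10_000))
print(f"median overshoot of the first success: {margins[len(margins) // 2]:.1f} units")
# A few units past 100, not a jump to 1,000: the first success is a marginal
# improvement over a near-miss.
```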

“As smart as a human” here means closer to “some human” than to “all humans”. The first AI will almost certainly be at most as intelligent as the average human, and considerably less intelligent than its designers. But for an AI to make itself smarter through algorithm improvement, it must be more intelligent than the smartest humans working on AI (the ones who just built it).

The easier, more likely FOOM path is: build an AI as smart as a chimp. That AI grabs (or is given) orders of magnitude more resources, and gets smarter simply by brute force. THEN it redesigns itself.

That scaling-FOOM path is harder for AIs that start big than for AIs that start small. This means that the probability distribution for FOOM speed depends on the probability distribution for the amount of money that will be spent to build the first AI.
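One way to see that dependence concretely is as a marginalization over investment scales; a minimal sketch with made-up numbers, shown only to make the structure of the update explicit:

```python
# P(fast FOOM) = sum over investment scales of
#                P(fast FOOM | scale) * P(scale)
# Every number below is a placeholder, not an estimate from the post.
p_scale = {"basement": 0.05, "startup": 0.25, "Google-scale": 0.70}
p_fast_given_scale = {"basement": 0.50, "startup": 0.20, "Google-scale": 0.05}

p_fast_foom = sum(p_scale[s] * p_fast_given_scale[s] for s in p_scale)
print(f"P(fast FOOM) = {p_fast_foom:.3f}")
# Shifting probability mass toward "Google-scale" investment drags P(fast FOOM)
# down, which is the update being suggested here.
```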

Remember you are Bayesians. Your objective is not to accept or reject the hypothesis that the first AI will be developed according to this scenario. Your objective is to consider whether these ideas change the probability distribution you assign to FOOM speed.

The question I hope you’ll ask yourself now is not “Won’t data centers in Asia outnumber those in America by then?”, nor “Isn’t X smarter than Larry Page?”, but “What is the probability distribution over <capital investment that will produce the first average-human-level AI>?” I expect that the probabilities will be dominated by large investments, because the probability distribution over “capital investment that will produce the first X” appears to me to be dominated in recent decades by large investments, for similarly ambitious X such as “spaceflight to the moon” or “sequence of the human genome”. A very clever person could have invented low-cost genome sequencing in the 1990s and sequenced the genome him/herself. But no very clever person did.


1. I’m counting Elon Musk and Peter Thiel as the others.

2. This doesn’t need to be intentional. Trying to dominate information search should look about the same as trying to win the Singularity. Think of it as a long chess game in which Brin and Page keep making good moves that strengthen their position. Eventually they’ll look around and find they’re in a position to checkmate the world.