Which of the singularity schools (plus the no-singularity school) was right?

TL;DR of this post: Accelerating Change and Event Horizon were the most accurate schools, while Intelligence Explosion proved interestingly wrong: discontinuities only get a new field of AI capability off the ground rather than solving the entire problem à la nuclear weapons, and scaling does show discontinuities, but only in the sense that an intractable problem or paradigm becomes possible, not in the sense of chained discontinuities that solve the whole problem at a superhuman level. The non-singularitarian scenarios were wrong in retrospect, but in the 2000s it would have been somewhat reasonable to say that no singularity was going to happen.

In other words, the AI point of no return (PONR) has already passed, and we are living through a slow-rolling singularity right now.

Long answer: That’s the topic of this post.

Back in 2007, before deep learning, when AI had not yet solved real problems and the AI winter was still going strong, Eliezer Yudkowsky over at www.yudkowsky.net placed singularitarians into three camps, which I will reproduce here for comparison:

Accelerating Change:

Core claim: Our intuitions about change are linear; we expect roughly as much change as has occurred in the past over our own lifetimes. But technological change feeds on itself, and therefore accelerates. Change today is faster than it was 500 years ago, which in turn is faster than it was 5000 years ago. Our recent past is not a reliable guide to how much change we should expect in the future.

Strong claim: Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence.

Advocates: Ray Kurzweil, Alvin Toffler(?), John Smart

Event Horizon:

Core claim: For the last hundred thousand years, humans have been the smartest intelligences on the planet. All our social and technological progress was produced by human brains. Shortly, technology will advance to the point of improving on human intelligence (brain-computer interfaces, Artificial Intelligence). This will create a future that is weirder by far than most science fiction, a difference-in-kind that goes beyond amazing shiny gadgets.

Strong claim: To know what a superhuman intelligence would do, you would have to be at least that smart yourself. To know where Deep Blue would play in a chess game, you must play at Deep Blue’s level. Thus the future after the creation of smarter-than-human intelligence is absolutely unpredictable.

Advocates: Vernor Vinge

Intelligence Explosion:

Core claim: Intelligence has always been the source of technology. If technology can significantly improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.

Strong claim: This positive feedback cycle goes FOOM, like a chain of nuclear fissions gone critical – each intelligence improvement triggering an average of >1.000 further improvements of similar magnitude – though not necessarily on a smooth exponential pathway. Technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons. The ascent rapidly surges upward and creates superintelligence (minds orders of magnitude more powerful than human) before it hits physical limits.

Advocates: I. J. Good, Eliezer Yudkowsky

There’s a link to it to verify my claims on this topic: https://​​www.yudkowsky.net/​​singularity/​​schools

But we need a control group, given that these schools are biased toward believing that some great change or rapture is coming, even in its mildest forms. So I will construct a non-singularitarian school to serve as a control group, with both a positive and a negative scenario.

No Singularity (Positive):

Core Claim: The world won't mind-upload into digital life or get an AI god, nor will a catastrophe happen soon. Instead, it's the boring future predicted by the market: growth gets closer and closer to linear, or even sub-linear. The end of even a smooth exponential, combined with drastically dropping birthrates everywhere, means permanent stagnation. By around 2050, one of two things happens: slow extinction over thousands of years as birth rates keep failing to recover, or the world stabilizes permanently at a 2.1 replacement birthrate, never growing its economy or technology again, but still a rich world overall, whose people remain satisfied with this state of affairs until the Sun burns out.

Strong Claim: Humans are still human, human nature has been only weakly suppressed, and stagnation similar to the ancient era has returned, but people don't care, since the population has also stagnated or is slowly declining.

Advocates: The market as a whole, which mostly expects the no-singularity positive scenario.

Age of Malthusian Industrialism (No singularity, Negative):

Core claim: The 21st century turns out to be a disappointment in all respects. We do not merge with the Machine God, nor do we descend back into the Olduvai Gorge by way of the Fury Road. Instead, we get to experience the true torture of seeing the conventional, mainstream forecasts of all the boring, besuited economists, businessmen, and sundry beigeocrats pan out.

Strong Claim: Human genetic editing is banned by government edict around the world, to “protect human dignity” in the religious countries and “prevent inequality” in the religiously progressive ones. The 1% predictably flout these regulations at will, improving their progeny while keeping the rest of the human biomass down where they believe it belongs, but the elites do not have the demographic weight to compensate for plummeting average IQs as dysgenics decisively overtakes the Flynn Effect.

We discover that Kurzweil’s cake is a lie. Moore’s Law stalls, and the current buzz over deep learning turns into a permanent AI winter. Robin Hanson dies a disappointed man, though not before cryogenically freezing himself in the hope that he would be revived as an em. But Alcor goes bankrupt in 2145, and when it is discovered that somebody had embezzled the funds set aside for just such a contingency, nobody can be found to pay to keep those weird ice mummies around. They are perfunctorily tossed into a ditch, and whatever vestigial consciousness their frozen husks might have still possessed seeps and dissolves into the dirt along with their thawing lifeblood. A supermall is built on their bones around what is now an extremely crowded location in the Phoenix megapolis.

For the old concerns about graying populations and pensions are now ancient history. Because fertility preferences, like all aspects of personality, are heritable – and thus ultracompetitive in a world where the old Malthusian constraints have been relaxed – the “breeders” have long overtaken the “rearers” as a percentage of the population, and humanity is now in the midst of an epochal baby boom that will last centuries. Just as the human population rose tenfold from 1 billion in 1800 to 10 billion by 2100, so it will rise by yet another order of magnitude in the next two or three centuries. But this demographic expansion is highly dysgenic, so global average IQ falls by a standard deviation and technology stagnates. Sometime towards the middle of the millennium, the population will approach 100 billion souls and will soar past the carrying capacity of the global industrial economy.

Then things will get pretty awful.

But as they say, every problem contains the seed of its own solution. Gnon sets to winnowing the population, culling the sickly, the stupid, and the spendthrift. As the neoreactionary philosopher Nick Land notes, waxing Lovecraftian, “There is no machinery extant, or even rigorously imaginable, that can sustain a single iota of attained value outside the forges of Hell.”

In the harsh new world of Malthusian industrialism, Idiocracy starts giving way to A Farewell to Alms, the eugenic fertility patterns that undergirded IQ gains in Early Modern Britain and paved the way to the industrial revolution. A few more centuries of the most intelligent and hard-working having more surviving grandchildren, and we will be back to where we are now today, capable of having a second stab at solving the intelligence problem but able to draw from a vastly bigger population for the task.

Assuming that a Tyranid hive fleet hadn’t gobbled up Terra in the intervening millennium.

Advocates: Anatoly Karlin

Age of Malthusian Industrialism series:

http://www.unz.com/akarlin/short-history-of-3rd-millennium/

http://www.unz.com/akarlin/where-do-babies-come-from/

http://www.unz.com/akarlin/breeders-revenge/

http://www.unz.com/akarlin/breeding-breeders/

http://www.unz.com/akarlin/the-geopolitics-of-the-age-of-malthusian-industrialism/

http://www.unz.com/akarlin/world-population/

Now that we have listed the scenarios, we can ask: now that events have played out, who from 15-20 years ago was right?

And the basic answer is this:

The non-singularitarians were ultimately wrong, though we shouldn't be too harsh, since hindsight biases our estimates. That said, deep learning beating professional humans at Go was significant because it drew billions of dollars of private investment into AI at exactly the right time, and that money is usually far more stable than government money, essentially crushing the AI-winter problem once and for all. This is also a fairly continuous rather than discontinuous story.

But beyond stable money, why was it the right time? The answer is Richard Sutton's bitter lesson: compute, more than clever algorithms or built-in instincts, is what matters for intelligence, and we finally have enough compute to actually simulate intelligence. Combine this with real money from capitalists, and things explode exponentially fast. There is another post that at least shows why we routinely failed to get AGI with classical computers before, but can now: we can get very close to the Landauer limit, ending up with perhaps an order of magnitude more computation than the brain for 300 watts on a personal computer, when all is said and done about the absolute limit. Here's the link:

https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know

While the most pessimistic conclusions are challenged well enough in the comments that I don't think they will be much of a barrier in practice, the post is right that these limits essentially mean slow takeoff, i.e. the Accelerating Change story, was the correct model of growth in AI.
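For a rough sense of the numbers involved, here is a back-of-envelope sketch of the raw Landauer bound at the 300 W figure mentioned above. The brain-throughput figure is my own order-of-magnitude assumption, not something taken from the linked post:

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23             # Boltzmann constant, J/K
T = 300.0                      # roughly room temperature, K
E_bit = k_B * T * math.log(2)  # ~2.9e-21 J per irreversible bit operation

power_watts = 300.0            # the personal-computer power budget from the text
max_bit_ops_per_sec = power_watts / E_bit

# Rough brain throughput assumption (an order-of-magnitude guess, not a
# figure from the linked post).
brain_ops_per_sec = 1e15

print(f"Landauer-limited bit erasures/s at 300 W: {max_bit_ops_per_sec:.2e}")
print(f"Ratio to assumed brain throughput: {max_bit_ops_per_sec / brain_ops_per_sec:.1e}x")
```

The raw bound comes out many orders of magnitude above the brain; the linked post's argument, as I understand it, is that interconnect, memory, and reliability costs eat most of that gap in practice, which is where the "roughly one order of magnitude over the brain" figure comes from.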

GOFAI failed because it needed quantum computers to arrive quickly, since those are the only computers known to be reversible, circumventing the Landauer limit and operating instead at the Margolus-Levitin limit, which is far more favorable to the Intelligence Explosion story. Unfortunately, quantum computing is arriving much more slowly than the Intelligence Explosion thesis would require, so that story has mostly failed to pan out.

But mostly didn’t work as a story didn’t mean it didn’t entirely work, and there does seem to be a threshold type effect that Intelligence Explosion predicts would happen, and as models scale in compute, they get from not being able to do something at all to doing something passably well. That is a threshold discontinuity, but crucially instead of being the entire process of intelligence amplification like I. J. Good and Eliezer Yudkowsky thought, it only lets AI make something possible that used to be unatomatable by AI be automatable, similar to how human language suddenly appeared in the Cognitive Revolution, but didn’t guarantee that they would dominate the world 69,800 years later in the Industrial Revolution, which was the start of a slow takeoff of humanity where humans over 2 centuries gradually seperate more and more from nature, culminating in the early 2010s AI-PONR gradually taking away what control nature and evolution has left.

Speaking of PONR...

Daniel Kokotajlo (https://www.lesswrong.com/users/daniel-kokotajlo) uses PONR for the point at which AI-risk-reducing efforts become much less valuable because an AI takeover has become inevitable. He places the PONR in the 2020s, while I place it in the mid-2010s to early 2020s. My PONR estimate runs roughly from the time Go was demolished by AI to the Chinchilla scaling paper. That is my own PONR because once capitalism invests billions of dollars into something as mature as AI, it's a good predictor that the technology is important enough that it will be adopted in time. It also addresses the AI-winter issue, because there is now a stable source of funding for AGI that doesn't depend on governments fully funding AI research.

A final ode to the Event Horizon story. Event Horizon does appear to be correct in that even mild scaling, like roughly 3x a chimp's brain, effectively creates an event-horizon scenario: chimps can't even understand what a human can do, let alone replicate it in all but the most primitive details. It gets worse once we move beyond the most intelligent animals, such as primates, whales, and a few other groups: a worm doesn't begin to understand anything at all about a human, and even an orca or a komodo dragon, far easier for us to understand, cannot understand us. This is why AI risk is so bad: we will never understand what an AI can do, because the gap in intelligence is essentially more like the human-animal difference than the differences within human intelligence, which are at least bounded to something like 0.25x-1.9x the average. And when a more powerful group encounters a less powerful group, the default outcome is disastrous for the less powerful group; treating the weaker group well is not the default.

Implications for AI Alignment researchers:

The biggest positive implication is that AI research will take time to produce superintelligence, so we do have some time. We can't waste it, given the near-inevitability of AGI and superintelligence, but it does mean that, by and large, the intelligence revolution will spin up slowly at the start, so we have a chance to influence it. AI alignment therefore needs more researchers and more funding right now.

Next, up to a point, empirical research is far safer than theorists like MIRI tend to think. In fact, we will need short feedback loops to make the best use of our time, and we need productive mistakes to get at least a semblance of a solution. Thus this post:

https://www.lesswrong.com/posts/vQNJrJqebXEWjJfnz/a-note-about-differential-technological-development

is entirely the wrong way to go about things in an Accelerating Change world, where there aren't many sharp-left-turn discontinuities, and the ones that do occur won't solve the entire problem for AI either, so that post is irrelevant.

The next stage of alignment work is a world where MIRI transforms itself into purely an awareness center for AI risk, no longer a research organization unto itself. That work looks a lot more like Ajeya Cotra's sandwiching and empirical work than like the theoretical work of Eliezer Yudkowsky's MIRI team.

One slight positive implication: due to the Landauer limit, the general population will at most have AGIs, and superintelligence will remain outside individual hands for a long time to come. Thus, the scenario where a rogue person gets superintelligence in their basement is entirely impossible.

One negative implication is that the No Fire Alarm scenario is essentially the default condition. As Ray Kurzweil correctly saw, people's models of change are linear when change is actually exponential. Combine that with the fact that we are on the early slope of such a curve, and you can't tell without serious thought whether something will be a big deal or a small deal. So the No Fire Alarm condition will continue until it's too late. Here's a link:

https://intelligence.org/2017/10/13/fire-alarm/
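As a small illustration of that point, here is a sketch of a forecaster who fits a straight line to the recent past of an exponential process. The growth rate and time horizon are arbitrary assumptions, chosen only to show the shape of the error:

```python
# A forecaster extrapolates linearly from the last few years of an
# exponential process. All numbers are arbitrary and purely illustrative.
growth_per_year = 1.5      # assumed exponential growth factor
observed_years = 5         # how much recent history the forecaster uses

def exponential(t: float) -> float:
    return growth_per_year ** t

# Linear forecast: extend the average slope of the observed window.
slope = (exponential(observed_years) - exponential(0)) / observed_years

for t in (6, 10, 15, 20):
    linear_guess = exponential(observed_years) + slope * (t - observed_years)
    actual = exponential(t)
    print(f"year {t:2d}: linear forecast {linear_guess:9.1f}  vs actual {actual:11.1f}")
```

On the early slope the two forecasts are close; it's only later that they diverge wildly, which is why nothing feels alarming until the curves have already pulled apart.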

One final negative implication: our brains don't handle x-risk/doomsday, or its inverse, near-utopia, very well. Truth be told, the situation your brain is in is so far off-distribution, à la Extremal Goodhart, that an actual, tangible scenario of doom or near-utopia is not one it was designed to handle, and I unfortunately don't have good answers for your mental health here.