Misconceptions about continuous takeoff

There has been considerable debate over whether development in AI will experience a discontinuity, or whether it will follow a more continuous growth curve. Given the lack of consensus and the confusing, diverse terminology, it is natural to hypothesize that much of the debate is due to simple misunderstandings. Here, I seek to dissolve some misconceptions about the continuous perspective, based mostly on how I have seen people misinterpret it in my own experience.

First, I should clarify what I mean by continuous takeoff. By that, I mean a scenario where the development of competent, powerful AI follows a trajectory that is roughly in line with what we would have expected by extrapolating from past progress. That is, there is no point at which a single project lunges forward in development and creates an AI that is far more competent than anything that came before it. This leads to the first clarification:

Continuous doesn’t necessarily mean slow

The position I am calling “continuous” has been called a number of different names over the years. Many refer to it as “slow” or “soft.” I think “continuous” is preferable to these terms because it focuses attention on the strategically relevant part of the question. It seems to matter less what the actual clock-time is from AGI to superintelligence, and more whether there will be single projects that break previous technological trends and gain capabilities that are highly unusual relative to past progress.

Moreover, there are examples of rapid technological developments that I consider to be continuous. As an example, consider generative adversarial networks (GANs). In 2014, GANs were used to generate low-quality, black-and-white images of human faces. By late 2018, they were being used to create nearly photorealistic images of human faces.

Yet at no point during this development did any single project leap forward by a huge margin. Instead, each paper built upon the last by making minor improvements and increasing the compute involved. Since these minor improvements nonetheless happened rapidly, GANs developed quickly relative to the lifetimes of humans.

Extrapolating from this progress, we can expect GAN video generation to follow a similar trajectory, starting with simple, low-resolution clips and gradually transitioning to the creation of HD videos. What would be unusual is if someone, right now in late 2019, produced HD videos using GANs.

Large power differentials can still happen in a continuous takeoff

Power differentials between nations, communities, and people are not unusual in the course of history. Therefore, the existence of a deep power differential caused by AI would not automatically imply that a discontinuity has occurred.

In a continuous takeoff, a single nation or corporation might still pull ahead in AI development by a big margin and use this to their strategic advantage. To see how, consider how technology in the industrial revolution was used by western European nations to conquer much of the world.

Nations rich enough to manufacture rifles maintained a large strategic advantage over those unable to. Despite this, the rifle did not undergo any surprising development that catapulted it to extreme usefulness, as far as I can tell. Instead, rifles became gradually more accurate, with each decade producing slightly better models.

See also: Soft takeoff can still lead to decisive strategic advantage

Continuous takeoff doesn’t require believing that ems will come first

This misconception seems to be mostly a historical remnant of the Hanson-Yudkowsky AI-Foom debate. In the old days, there weren’t many people actively criticizing foom. So, if you disagreed with foom, it was probably because you were sympathetic to Hanson’s views, which prominently feature ems (whole brain emulations) arriving before de novo AI.

There are now many people who disagree with foom who don’t take Hanson’s side. Paul Christiano and AI Impacts appear somewhat at the forefront of this new view.

Recursive self-improvement is compatible with continuous takeoff

In my experience, recursive self-improvement is one of the main reasons cited for why we should expect a discontinuity. Evaluating this argument is far from simple, but needless to say: folks who subscribe to continuous takeoff aren’t simply ignoring it.

Consider I. J. Good’s initial elaboration of recursive self-improvement:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind.

The obvious interpretation from the continuous perspective is that by the time we have an ultraintelligent machine, we’ll already have a not-quite-ultraintelligent machine. Therefore, the advantage that an ultraintelligent machine will have over the collective of humanity + machines will be modest.

It is sometimes argued that even if this advantage is modest, the growth curves will be exponential, and therefore a slight advantage right now will compound to become a large advantage over a long enough period of time. However, this argument by itself is not an argument against a continuous takeoff.

Exponential growth curves are already common (macroeconomic growth is one example), so the compounding argument applies equally to any system that experiences a positive feedback loop. Furthermore, large strategic advantages do not automatically constitute a discontinuity, since they can arise even when no project surges forward suddenly.
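
To make the point concrete, here is a minimal sketch (the numbers and growth rates are purely hypothetical, chosen only for illustration) of two projects growing at the same exponential rate, with one holding a small head start. The absolute gap between them compounds over time, yet neither trajectory ever jumps: each step is only a modest multiple of the one before, exactly as trend extrapolation would predict.

```python
# Hypothetical illustration: two projects grow exponentially at the same rate,
# with project A holding a small initial lead. The absolute gap compounds,
# but neither curve contains a discontinuity.

GROWTH_RATE = 0.10   # assumed 10% capability growth per period
HEAD_START = 1.05    # assumed 5% initial lead for project A
PERIODS = 50

a, b = HEAD_START, 1.0
for t in range(PERIODS + 1):
    if t % 10 == 0:
        print(f"t={t:2d}  A={a:10.2f}  B={b:10.2f}  gap={a - b:8.2f}  ratio={a / b:.3f}")
    a *= 1 + GROWTH_RATE
    b *= 1 + GROWTH_RATE
```

Running this shows the absolute gap growing from 0.05 to roughly 6 by t = 50, even though the ratio between the two projects stays fixed at 1.05 and every step is a smooth 10% improvement; a compounding advantage and a continuous trajectory are entirely compatible.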

Continuous takeoff is relevant to AI alignment

The misconception here is something along the lines of, “Well, we might not be able to agree about AI takeoff, but at least we can agree that AI safety is extremely valuable in either case.” Unfortunately, the usefulness of many approaches to AI alignment appears to hinge quite a bit on whether takeoff is continuous.

Consider the question of whether an AGI would defect during testing. The argument goes that an AI will have an instrumental reason to pretend to be aligned while weak, and then execute a treacherous turn once it is safe from modification. If this phenomenon ever occurs, there are two distinct approaches we can take to minimize potential harm.

First, we could apply extreme caution and try to ensure that no system will ever lie about its intentions. Second, we could more or less deal with systems that defect as they arise. For instance, during deployment we could notice that some systems are optimizing for something different from what we intended during training, and shut them down.
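
As a very rough illustration of the second approach, here is a toy sketch of deployment-time monitoring. Everything in it is hypothetical (the audit scores, the tolerance, and the system names are stand-ins rather than a real alignment technique); the point is only that under continuous takeoff, “notice the drift and shut it down” is at least a coherent policy.

```python
# Toy sketch of the second approach: compare a system's behavior during
# deployment against the behavior we observed during training, and shut it
# down if the two diverge too much. All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class DeployedSystem:
    name: str
    training_audit_score: float    # behavioral audit score measured during training
    deployment_audit_score: float  # the same audit, re-run during deployment

def should_shut_down(system: DeployedSystem, tolerance: float = 0.2) -> bool:
    """Flag a system whose deployed behavior has drifted too far from training."""
    drift = system.training_audit_score - system.deployment_audit_score
    return drift > tolerance

fleet = [
    DeployedSystem("assistant-v1", training_audit_score=0.95, deployment_audit_score=0.93),
    DeployedSystem("assistant-v2", training_audit_score=0.97, deployment_audit_score=0.60),
]

for system in fleet:
    if should_shut_down(system):
        print(f"Shutting down {system.name}: behavior has drifted from training.")
    else:
        print(f"{system.name} looks consistent with its training behavior.")
```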

The first approach is preferred if you think that there will be a rapid capability gain relative to the rest of civilization. If we deploy an AI and it suddenly catapults to exceptional competence, then we don’t really have a choice other than to get its values right the first time.

On the other hand, under a continuous takeoff, the second approach seems more promising. No individual system will by itself carry more power than the sum of the projects that came before it. Instead, each AI will be only slightly better than the ones that came before it, including any AIs we are using to monitor the newer ones. Therefore, to the extent that the second approach carries a risk, it will probably look less like sudden world domination and more like a bad product rollout, in line with, say, the release of Windows Vista.

Now, obviously there are important differences between current technological products and future AGIs. Still, the general strategy of “dealing with things as they come up” is much more viable under continuous takeoff. Therefore, if a continuous takeoff is more likely, we should focus our attention on questions which fundamentally can’t be solved as they come up. This is a departure from the way that many have framed AI alignment in the past.