The Hard Intelligence Hypothesis and Its Bearing on Succession Induced Foom

Disclaimer

All notations and formalisms used in this post are outdated and strictly superseded by the notations and formalisms used in “Towards a Formalisation of Returns on Cognitive Reinvestment (Part 1)”. I will come back one day and unify the notation, but I’m not sure how to do that without making this post too unwieldy.


Introduction

Consider that intelligence may be hard (let’s call this the “hard intelligence hypothesis”, or HIH). I suspect that it is, and I think there’s a lot of empirical evidence for it.

However, for the purposes of this post, I don’t want to explore the evidence, intuitions, or arguments underpinning a belief in HIH. I would simply like to speculate on how HIH might bear on recursive self-improvement, and particularly on foom arising from recursive self-improvement.

I think a better understanding of the “hardness” of intelligence would better inform our position on takeoff dynamics.

A Pre-Formal Definition of HIH

My current working definition of HIH is something like:

Returns on intelligence scale sublinearly with resources expended to increase intelligence.

Intelligence is itself just another resource that can be expended to increase intelligence. That particular case is what I will be considering here.

The “hardness” of intelligence with respect to a given resource can be thought of as how sublinear the curve is (perhaps what complexity class it belongs to?).
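
As a rough sketch of what I mean (using ad hoc notation of my own, not the notation from the formalisation post): let $r$ be the quantity of some resource expended on increasing intelligence, and let $\Delta(r)$ be the resulting gain in intelligence. Then HIH says something like:

$$\Delta(r) \in o(r)$$

That is, the gain grows strictly slower than linearly in the resource expended. Hardness with respect to that resource is then a claim about how slowly $\Delta$ grows; e.g. $\Delta(r) \in \Theta(\log r)$ would be “harder” than $\Delta(r) \in \Theta(\sqrt{r})$.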

What is Recursive Self Improvement?

The basic story of recursive self-improvement is that above a given level of intelligence (e.g. “par human”), the AI will be able to inspect and modify its own architecture/algorithms, to directly become more intelligent or to design a more intelligent successor.

It doesn’t particularly matter whether the system directly self modifies or creates a successor. Self-modification can be understood as a particular case of creating a successor. So for the rest of this post, I’ll look only at the case of successors.

Interlude

Unless otherwise specified, all variables that appear in LaTeX should be understood to be positive real numbers.

A Semi-Formal Description of Succession

Consider an agent $A_i$; let the successor agent that $A_i$ creates be denoted $A_{i+1}$.

Let $A_0$ be the first agent intelligent enough to design successor agents.

Posit also that we have a linear, additive measure of “intelligence”. We’ll denote this measure by $g$. I’ll denote an agent $A_i$ with $g = x$ as such: $A_i(x)$.

To explain what I mean by “linear additive measure”, suppose we have 2 tuples of agents: $(A_i(x), A_j(x + c))$ and $(A_k(y), A_l(y + c))$. Then the intra-pair difference in capability is the same.
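
Concretely, writing $\mathrm{cap}(\cdot)$ for capability (a label introduced just for this equation, with “capability” left informal):

$$\mathrm{cap}(A_j(x + c)) - \mathrm{cap}(A_i(x)) = \mathrm{cap}(A_l(y + c)) - \mathrm{cap}(A_k(y))$$

i.e. a fixed increment $c$ in $g$ buys the same increment in capability anywhere on the scale.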

For convenience’s sake, let $g_i$ be the intelligence of $A_i$.

So the story of recursive self-improvement is that $A_0$ instantiates a successor $A_1$.

$A_1$ is even more intelligent (and thus better at creating successor agents) and so instantiates an even more capable successor $A_2$.

The process continues until we approach saturation or the fundamental limits of intelligence (which may well be far beyond the human domain).
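
One way to summarise the process (my own sketch, assuming the notation above plus an “improvement function” $f$ that captures how much extra $g$ a parent of intelligence $g_i$ can buy with a fixed budget of resources):

$$g_{i+1} = g_i + f(g_i), \qquad i = 0, 1, 2, \ldots$$

The shape of $f$ is what the rest of this post turns on: the foom story needs $f$ to grow with $g_i$, whereas HIH says it grows very sublinearly (or even shrinks).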

The Assumptions Underlying Fast Takeoff

It does not follow from succession that takeoff would be fast or hard. The idea that succession leads to fast takeoff/“foom” rests on one or a few (implicit?) assumptions that I am *very* sceptical of:

  1. That e.g. $A_1$ will have an “easier” time creating $A_2$ than $A_0$ had creating $A_1$.

  2. That it’s easier for $A_{i+1}$ to instantiate a successor than for $A_i$ to instantiate a successor (assuming $A_{i+1}$ is far from the upper limit of intelligent agents).

“Ease” here can be understood to refer to resources expended in the endeavour:

  • Compute

  • Time

  • Data

  • Etc.

We can probably aggregate all of the above into the total financial cost of the endeavour. More capable agents may have more resources available, so perhaps the best metric is something like:

$$\frac{\text{total cost of creating the successor}}{\text{total resources available to the parent agent}}$$

The assumptions above lead to the following beliefs:

  1. The time between successive generations of agents will shorten.
    (Perhaps it takes 6 months to move from $A_0$ to $A_1$, but only a month to move from $A_7$ to $A_8$ [again assuming $A_8$ is far from the upper limits])

  2. The gap in capability between successive agents will widen.
    (Perhaps the difference in capability between $A_7$ and $A_8$ is $10d$, but between $A_0$ and $A_1$ it’s only $d$)

If at least one of the above two statements is true, then succession leads to a fast/hard takeoff.
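
In the notation above (with $t_i$ a label I’m introducing here for the time, or more generally the resources, that $A_i$ expends creating $A_{i+1}$), the two beliefs are roughly:

$$t_{i+1} < t_i \qquad \text{and/or} \qquad g_{i+2} - g_{i+1} > g_{i+1} - g_i$$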

What if Intelligence Was Hard?

What if the above assumptions were wrong? Not just wrong, but the exact *opposite* of what is actually the case?

What if as you move to higher levels of intelligence, it becomes *much* harder to eke out additional performance?

I.e. if you graph improvement (the difference between the $g$ of the successor and the $g$ of the parent) on the y-axis against the $g$ of the parent agent on the x-axis (holding “effort” constant), the result is a very sublinear curve (as opposed to superlinear if the assumptions were correct).
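
In symbols (again my own phrasing): writing $\Delta(g)$ for the improvement a parent of intelligence $g$ achieves with fixed effort, HIH says $\Delta(g) \in o(g)$, whereas the fast takeoff assumptions need $\Delta(g)$ to grow superlinearly in $g$ over the relevant range.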

Consequences of HIH for the design of successor agents:

  1. The time between successive generations of agents will lengthen progressively.
    (Perhaps it takes 1 month to move from $A_0$ to $A_1$, but 6 months to move from $A_7$ to $A_8$)

  2. The gap in capability between successive agents will narrow.
    (Perhaps the difference in capability between $A_0$ and $A_1$ is $10d$, but between $A_7$ and $A_8$ it’s only $d$)

(Again assuming $A_8$ is far from the upper limits)

Note that it follows from HIH that succession does not imply hard or fast takeoff scenarios.
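
To make the contrast concrete, here is a toy simulation (entirely illustrative: the recurrence, the particular improvement functions, and all constants are made up and not derived from anything above):

```python
# Toy model of succession: g[i+1] = g[i] + f(g[i]), holding per-generation
# "effort" fixed. Both improvement functions are made up for illustration;
# nothing here is derived from the post's informal definitions.

def superlinear_improvement(g):
    # Foom-style assumption: smarter parents extract disproportionately
    # larger improvements from the same effort.
    return 0.001 * g ** 2

def sublinear_improvement(g):
    # HIH-style assumption: the smarter the parent, the less additional g
    # the same effort buys (inverse proportionality is purely illustrative).
    return 1000.0 / g

def successive_gaps(improvement, g0=100.0, generations=10):
    """Return the capability gap g[i+1] - g[i] for each generation."""
    g, gaps = g0, []
    for _ in range(generations):
        delta = improvement(g)
        gaps.append(delta)
        g += delta
    return gaps

if __name__ == "__main__":
    print("foom-style gaps (widening):", [round(d, 1) for d in successive_gaps(superlinear_improvement)])
    print("HIH-style gaps (narrowing):", [round(d, 1) for d in successive_gaps(sublinear_improvement)])
```

Under the superlinear (foom-style) function the gaps widen every generation; under the sublinear (HIH-style) function they steadily narrow, mirroring the two scenarios above.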

Yes, there’ll eventually be a vastly superhuman successor agent ($A_n$ for large enough $n$), but it’ll take a “long time” to get there.

HIH actually implies a slow takeoff. That is, it’ll take (much?) longer to transition from par human systems to superhuman systems than it did to reach par human systems from near-human systems.

Preliminary Conclusions

Thus the “hardness” of intelligence is a very important area of inquiry for determining the takeoff dynamics arising from succession (and takeoff dynamics in general, once hardness with respect to other resources is considered).

I think in worlds where intelligence is sufficiently hard, fast takeoff (via any route) is outright infeasible (all resources expended offer sublinear returns, and some curves are more sublinear than others).

Closing Thoughts

I would appreciate it if those making the claim that succession can induce foom could argue convincingly that intelligence is insufficiently hard.

That if you graphed improvement on the y-axis against $g$ on the x-axis, there’s a relevant non-trivial interval in which the graph is superlinear.

That’s what is required for succession induced foom to be feasible.

I do not recall the original arguments for succession induced foom grappling with the hardness of intelligence at all (however, I originally read them 3–5 years ago and may be misremembering them). They seemed to just take it for granted that it was “easy”.

HIH is a strong intuition of mine. I would need strong evidence (including compelling arguments) to shift away from it.

However, it’s not something I can demonstrate or prove definitively. Perhaps in subsequent posts, I’ll explore at length why I think intelligence is hard.