[Question] Why do People Think Intelligence Will be “Easy”?

In many discussions I’ve had with people about AI takeoff, I’ve frequently encountered a belief that intelligence is going to be “easy”.

To quantify what I mean by “easy”: many people seem to expect that the marginal returns on cognitive investment (whether via computational resources, human intellectual labour, cognitive reinvestment, or general economic resources) will not diminish, or will diminish gracefully (i.e. at a sublinear rate).

(Specifically, marginal returns around and immediately beyond the human cognitive capability frontier.)

I find this a bit baffling/absurd, honestly. My default intuitions lean towards marginal returns to cognitive reinvestment diminishing at a superlinear rate, and my armchair philosophising so far (thought experiments and general thinking around the issue) seems to support this intuition.

A couple of intuition pumps:

  • 50%, 75%, 87.5%, 93.75%, … are linear jumps in predictive accuracy (each step adds one bit, i.e. halves the error rate), but the absolute accuracy gained per bit shrinks exponentially (see the sketch after this list)

    • On the other hand, 6.25%, 12.5%, 25%, 50% represent the same linear jumps (one bit each), but this time the absolute accuracy gained per bit grows exponentially

    • This suggests that returns to cognitive investment might behave differently depending on where on the cognitive capability curve you are

      • Though I’ve not yet thought about how this behaviour generalises to aspects of cognition other than predictive accuracy

  • A priori, I’d expect that, given that human dominance depends almost entirely on our (collective) intelligence, evolution would have selected strongly for intelligence until it ran into diminishing returns to higher intelligence
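
To make the first intuition pump concrete, here is a toy sketch. This is my own framing of “one bit of progress” (halving the error rate above 50% accuracy, doubling the accuracy below 50%), not a standard metric; it just shows how the same linear sequence of bits yields exponentially shrinking accuracy gains in one regime and exponentially growing gains in the other.

```python
# Toy sketch: absolute accuracy gained per "bit" of predictive progress,
# under the (assumed) framing that one bit halves the error above 50%
# accuracy and doubles the accuracy below 50%.

def next_accuracy(acc: float) -> float:
    """One 'bit' of progress: double accuracy below 50%, halve error above 50%."""
    if acc < 0.5:
        return acc * 2            # sub-50% regime: accuracy doubles per bit
    return 1 - (1 - acc) / 2      # above 50%: error halves per bit

def gains_per_bit(start: float, steps: int) -> list[float]:
    """Absolute accuracy gained by each successive bit of progress."""
    gains, acc = [], start
    for _ in range(steps):
        new_acc = next_accuracy(acc)
        gains.append(new_acc - acc)
        acc = new_acc
    return gains

print(gains_per_bit(0.0625, 3))  # [0.0625, 0.125, 0.25] -> gains grow exponentially
print(gains_per_bit(0.5, 3))     # [0.25, 0.125, 0.0625] -> gains shrink exponentially
```

Under this framing, the same “one bit per step” investment buys exponentially more accuracy below the 50% mark and exponentially less above it, which is the sense in which where you sit on the capability curve changes the shape of the returns.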

Another way to frame my question: there will obviously be diminishing marginal returns (perhaps superlinearly diminishing) at some point; why are we confident that that point lies far beyond the peak of human intelligence?