The greater a technology’s complexity, the more slowly it improves?

A new study by researchers at MIT and other institutions shows that it may be possible to predict which technologies are likeliest to advance rapidly, and which may therefore be worth more investment of research effort and resources.

The researchers found that the greater a technology’s complexity, the more slowly it changes and improves over time. They devised a way of mathematically modeling complexity, breaking a system down into its individual components and then mapping all the interconnections between these components.

Link: nextbigfuture.com/2011/05/mit-proves-that-simpler-systems-can.html
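
The article does not spell out the researchers' actual metric, but the underlying idea is easy to illustrate: treat a technology as a graph of components and interconnections, and take the number of connections touching each component as a crude measure of how constrained any change to it is. The component names, functions, and degree-based measure below are my own illustrative assumptions, not the study's model:

```python
# Minimal sketch (an illustrative assumption, not the MIT model itself):
# represent a technology as components plus pairwise interconnections, and
# measure how constrained each component is by how many connections touch it.

from itertools import combinations

def constraints_per_component(components, connections):
    """Map each component to the number of other components it is connected to."""
    degree = {c: 0 for c in components}
    for a, b in connections:
        degree[a] += 1
        degree[b] += 1
    return degree

# Hypothetical example: a sparsely connected system vs. a fully connected one.
simple = (["blade", "rotor", "generator"],
          [("blade", "rotor"), ("rotor", "generator")])
dense_parts = ["a", "b", "c", "d", "e"]
dense = (dense_parts, list(combinations(dense_parts, 2)))  # every pair connected

for name, (components, connections) in [("simple", simple), ("dense", dense)]:
    degrees = constraints_per_component(components, connections)
    print(f"{name}: {len(components)} components, "
          f"{len(connections)} interconnections, "
          f"avg. {sum(degrees.values()) / len(degrees):.2f} constraints per component")
```

On this reading, the denser the graph, the less room there is for isolated improvements, which is the intuition behind "the more complex, the more slowly it improves."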

Might this also be the case for intelligence? Can intelligence be effectively applied to itself? To break the question down:

  • If you increase intelligence, do you also decrease the distance between discoveries?

  • Does the benefit of an increase in intelligence vastly outweigh its computational cost and the time needed to discover it?

  • Would it be instrumental for an AGI to increase its intelligence rather than using its existing intelligence to pursue its terminal goal?

  • Does the payoff from spending resources on increasing intelligence outweigh the opportunity cost of not using those resources to pursue its terminal goal directly?

This reminds me of a post by Robin Hanson:

Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city. Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations.

Link: Is The City-ularity Near?

Of course, the complexity of an artificial general intelligence might differ in kind from that of cities. But do we have any evidence that hints at such a possibility?

Another argument made for an AI project causing a big jump is that intelligence might be the sort of thing for which there is a single principle. Until you discover it you have nothing, and afterwards you can build the smartest thing ever in an afternoon and can just extend it indefinitely. Why would intelligence have such a principle? I haven’t heard any good reason. That we can imagine a simple, all powerful principle of controlling everything in the world isn’t evidence for it existing.

Link: How far can AI jump?

(via Hard Takeoff Sources)