[Links] The structure of exploration and exploitation

Inefficiencies are necessary for resilience:

Results suggest that when agents are dealing with a complex problem, the more efficient the network at disseminating information, the better the short-run but the lower the long-run performance of the system. The dynamic underlying this result is that an inefficient network maintains diversity in the system and is thus better for exploration than an efficient network, supporting a more thorough search for solutions in the long run.
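To make that dynamic concrete, here is a rough toy sketch in Python (not the study's actual model): agents search an NK-style rugged landscape, copying the best solution among their network neighbours when it beats their own and otherwise trying a single local mutation. The landscape, imitation rule and parameters are purely illustrative assumptions; the point is only the structural contrast between an "efficient" fully connected network, which spreads good solutions quickly, and an "inefficient" sparse ring, which keeps more diverse solutions in play for longer.

```python
import random

random.seed(1)
N_AGENTS, N, K, STEPS = 20, 12, 4, 150

# NK-style rugged landscape: each locus' contribution depends on its own
# value plus the values of K randomly chosen other loci.
deps = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
tables = [{} for _ in range(N)]

def fitness(bits):
    total = 0.0
    for i in range(N):
        key = (bits[i],) + tuple(bits[j] for j in deps[i])
        if key not in tables[i]:
            tables[i][key] = random.random()  # lazily drawn random contribution
        total += tables[i][key]
    return total / N

def run(neighbours):
    pop = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(N_AGENTS)]
    for _ in range(STEPS):
        nxt = []
        for i, sol in enumerate(pop):
            best_nb = max((pop[j] for j in neighbours[i]), key=fitness)
            if fitness(best_nb) > fitness(sol):
                sol = best_nb                    # exploit: imitate a better neighbour
            else:
                k = random.randrange(N)          # explore: try flipping one bit
                cand = sol[:k] + (1 - sol[k],) + sol[k + 1:]
                if fitness(cand) > fitness(sol):
                    sol = cand
            nxt.append(sol)
        pop = nxt
    return sum(fitness(s) for s in pop) / N_AGENTS

# "Efficient" network: everyone sees everyone. "Inefficient": a sparse ring.
full = {i: [j for j in range(N_AGENTS) if j != i] for i in range(N_AGENTS)}
ring = {i: [(i - 1) % N_AGENTS, (i + 1) % N_AGENTS] for i in range(N_AGENTS)}

print("fully connected:", round(run(full), 3))
print("ring:           ", round(run(ring), 3))
```

Whether the short-run/long-run crossover shows up in any given run depends on the parameters; the sketch is only meant to show the mechanism by which a well-connected population can lock onto a single local peak while a sparsely connected one keeps exploring.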

Introducing a degree of inefficiency so that the system as a whole has the potential to evolve:

Efficiency is about maximising productivity while minimising expense. It's something that organisations have to do as part of routine management, but something they can only safely do in stable environments. Leadership is not about stability; it is about managing uncertainty through changing contexts.

That means introducing a degree of inefficiency so that the system as a whole has the potential to evolve. Good leaders generally provide top cover for mavericks, listen to contrary opinions and maintain a degree of resilience in the system as a whole.

Systems that eliminate failure, eliminate innovation:

Innovation happens when people use things in unexpected ways, or come up against intractable problems. We learn from tolerated failure; without it, the world is sterile and dies. Systems that eliminate failure, eliminate innovation.

Natural systems are highly effective but inefficient due to their massive redundancy:

Natural systems are highly effective but inefficient due to their massive redundancy (picture a tree dropping thousands of seeds). By contrast, manufactured systems must be efficient (to be competitive) and usually have almost no redundancy, so they are extremely vulnerable to breakage. For example, many of our modern industrial systems will collapse without a constant and unlimited supply of inexpensive oil.

I just came across those links here.

Might our “irrationality” and the patchwork architecture of the human brain actually be features? Might intelligence depend upon the noise of the human brain?

A lot of progress is due to luck, in the form of the discovery of unknown unknowns. The noisiness and patchwork architecture of the human brain might play a significant role here, because they allow us to become distracted, to leave the path of evidence-based exploration. A lot of discoveries were made by people pursuing “Rare Disease for Cute Kitten” activities.

How much of what we know was actually the result of people thinking quantitatively and attending to scope, probability, and marginal impacts? How much of what we know today is the result of dumb luck versus goal-oriented, intelligent problem solving?

My point is: what evidence do we have that intelligent, goal-oriented experimentation yields advantages over evolutionary discovery large enough, relative to its cost, to enable explosive recursive self-improvement? What evidence do we have that any increase in intelligence vastly outweighs its computational cost and the expenditure of time needed to discover it?

There is a significant difference between intelligence and evolution if you apply intelligence to the improvement of evolutionary designs:

  • Intelligence is goal-oriented.

  • Intelligence can think ahead.

  • Intelligence can jump fitness gaps.

  • Intelligence can engage in direct experimentation.

  • Intelligence can observe and incorporate solutions of other optimizing agents.

But when it comes to unknown unknowns, what difference is there between intelligence and evolution? The critical similarity is that both rely on dumb luck when it comes to genuine novelty. And where, if not in the dramatic improvement of intelligence itself, would the discovery of novel unknown unknowns be required?

A basic argument supporting the risks from superhuman intelligence is that we don’t know what it could possibly come up with. That is why we call it a ‘Singularity’. But why does nobody ask how it knows what it could possibly come up with?

It is argued that the mind-design space must be large if evolution could stumble upon general intelligence. I am not sure how valid that argument is, but even if it holds, shouldn’t the remaining mind-design space shrink dramatically with every iteration, and therefore demand a lot more time to stumble upon new solutions?

An unquestioned assumption seems to be that intelligence is a kind of black box, a cornucopia that can sprout an abundance of novelty. But this implicitly assumes that if you increase intelligence you also decrease the distance between discoveries. Intelligence is no solution in itself; it is merely an effective searchlight for unknown unknowns. But who says that the brightness of the light increases in proportion to the distance between unknown unknowns? To have an intelligence explosion, the light would have to reach out much farther with each generation than the distance between unknown unknowns grows. I just don’t see why that is a reasonable assumption.

It seems that if you increase intelligence you also increase the computational cost of its further improvement and the distance to the discovery of some unknown unknown that could enable another quantum leap. It seems that you need to apply a lot more energy to get a bit more complexity.
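To put the last two paragraphs in toy numerical terms (the growth factors below are arbitrary assumptions, not estimates): if capability, the “reach” of the searchlight, grows by a factor g with each discovery while the distance to the next unknown unknown grows by a factor h, the time per discovery behaves very differently depending on which factor dominates.

```python
# Toy sketch of the searchlight argument: reach grows by a factor g per
# discovery, the distance to the next unknown unknown grows by a factor h,
# and the time to the next discovery is taken as distance / reach.
# The factors are arbitrary illustrative assumptions.
def discovery_times(g, h, steps=10, reach=1.0, distance=1.0):
    times = []
    for _ in range(steps):
        times.append(distance / reach)  # time to cover the current gap
        reach *= g                      # each discovery improves the searchlight
        distance *= h                   # ...but the next gap is wider
    return times

print("reach outpaces distance (g=2.0, h=1.2):",
      [round(t, 2) for t in discovery_times(2.0, 1.2)])
print("distance outpaces reach (g=1.2, h=2.0):",
      [round(t, 2) for t in discovery_times(1.2, 2.0)])
```

Only in the first case do the intervals between discoveries keep shrinking; in the second, each jump takes longer than the last, which is the diminishing-returns picture sketched above.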