To talk about an intelligence explosion, one has to know what one means by “intelligence” as well as by “explosion”. So it’s worth reflecting that there are currently no measures of general intelligence that are precise, objectively defined and broadly extensible beyond the human scope. However, since “intelligence explosion” is a qualitative concept, we believe the commonsense qualitative understanding of intelligence suffices.
Some people probably stopped reading after that. Intelligence might very well depend upon the noise of the human brain. A lot of progress is due to luck, in the form of the discovery of unknown unknowns. Intelligence is a goal-oriented evolutionary process equipped with a memory: it is evolutionary insofar as it still needs to stumble upon novelty. Intelligence is not a meta-solution but an efficient searchlight that helps us discover unknown unknowns, and a tool that can efficiently exploit previous discoveries, combining and permuting them. But claiming that you merely have to be sufficiently intelligent to solve a given problem makes it sound like more than that, and I don’t see that. If something crucial is missing, something you don’t know is missing, you’ll have to discover it first; you can’t invent it by the sheer power of intelligence. Here the noisiness and patchwork architecture of the human brain might play a significant role, because it allows us to become distracted, to follow routes that no rational, perfectly Bayesian agent would take, since no prior evidence exists to justify them. The complexity of human values might very well be a key feature of our success. There is no evidence that intelligence is fathomable as a solution that can be applied to itself effectively.
You expect that noisy ‘non-Bayesian’ exploration will yield greater success. If you are correct, then this is what the perfect Bayesian would expect as well. You seem to be thinking that a ‘rational’ agent needs to have some rationale or justification for pursuing some path of exploration, and this might lead it astray. Well, if it does that, it’s just stupid, and not a perfect Bayesian.
I don’t think you managed to establish that a perfect Bayesian would do worse than a human. But I think you hit upon an important point: it is quite possible for the solutions in the search space to be so sparse that no process whatsoever can reliably hit them and yield consistent recursive self-improvement.
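To make the sparsity worry concrete, here is a toy calculation (my own illustration, not part of the original exchange): if a design space over n bits contains only s viable improvements, a blind uniform search needs on the order of 2**n / s probes before its first hit, so the expected cost explodes with n regardless of how the probes are generated.

```python
def expected_probes(n_bits, n_solutions):
    """Expected number of uniform random probes before hitting a
    solution set of size n_solutions inside a space of size 2**n_bits.
    (The hit count is geometrically distributed; mean = 1 / p_hit.)"""
    space = 2 ** n_bits
    p_hit = n_solutions / space
    return 1 / p_hit

# Even a generous thousand solutions barely dents the exponential growth.
for n in (20, 40, 60):
    print(n, expected_probes(n, n_solutions=1000))
```

Doubling n from 20 to 40 multiplies the expected cost by 2**20, which is why sparsity, rather than the searcher's cleverness, can dominate whether self-improvement steps are ever found.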
If you are correct, then this is what the perfect Bayesian would expect as well.
I tried to say that being irrational aids discovery. If Bayesian equals winning, then you are correct. Here is another example: if everyone were perfectly rational, then a lot of explorations that unexpectedly yielded new insights would never have happened. Your claim that a perfect Bayesian would have expected this sounds like hindsight bias to me.
Yes, the way we define ‘perfect Bayesian’ is unfair, but is this really a problem?
I tried to say that being irrational aids discovery.
If discovery contributes to utility then our Bayesian (expected utility maximizer) will take note of this.
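This point has a standard illustration in the bandit literature (my sketch, not from the thread): a Bayesian agent that maximizes expected utility under uncertainty explores automatically, because its posterior uncertainty makes under-tried options look potentially valuable. Thompson sampling is one such agent; the arm payoffs below are made-up numbers for the demo.

```python
import random

def thompson_run(true_probs, steps=2000, seed=0):
    """Bernoulli bandit via Thompson sampling: a Bayesian agent that
    explores exactly insofar as exploration is expected to pay off."""
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1] * k  # Beta(1, 1) priors over each arm's
    beta = [1] * k   # unknown payoff probability
    reward = 0
    for _ in range(steps):
        # Sample a plausible payoff rate for each arm from its posterior,
        # then act greedily on the samples -- posterior uncertainty itself
        # drives the agent toward under-explored arms; no added noise.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        hit = rng.random() < true_probs[arm]
        reward += hit
        alpha[arm] += hit
        beta[arm] += 1 - hit
    return reward

def greedy_run(true_probs, steps=2000, seed=0):
    """Non-exploring agent: always pulls the arm with the best
    observed payoff rate so far (untried arms rated optimistically)."""
    rng = random.Random(seed)
    k = len(true_probs)
    counts = [0] * k
    wins = [0] * k
    reward = 0
    for _ in range(steps):
        rates = [wins[i] / counts[i] if counts[i] else 1.0 for i in range(k)]
        arm = max(range(k), key=lambda i: rates[i])
        hit = rng.random() < true_probs[arm]
        reward += hit
        counts[arm] += 1
        wins[arm] += hit
    return reward

probs = [0.2, 0.25, 0.7]  # hypothetical payoffs; the best arm is not tried first
print(thompson_run(probs), greedy_run(probs))
```

The Bayesian agent needs no commitment to "irrationality" to keep probing unfamiliar arms; valuing discovery falls out of expected-utility maximization, which is the reply being made here.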
Here is another example.
You are here relying on a definition of rational which excludes being good at coordination problems.
this sounds like hindsight bias to me
Au contraire, I think our pride in our ‘irrationality’ is where the hindsight bias is! As you said, we got lucky. That would be fine if our ‘luck’ were of the consistent type, but in all likelihood the way we have exposed ourselves to serendipity was suboptimal.
It’s entirely possible for our Bayesian to lose to you. It’s just improbable.