You expect that noisy ‘non-Bayesian’ exploration will yield greater success. If you are correct, then this is what the perfect Bayesian would expect as well. You seem to be thinking that a ‘rational’ agent needs to have some rationale or justification for pursuing some path of exploration, and this might lead it astray. Well, if it does that, it’s just stupid, and not a perfect Bayesian.
I don’t think you managed to establish that a perfect Bayesian would do worse than a human. But I think you hit upon an important point: it is quite possible for the solutions in the search space to be so sparse that no process whatsoever can reliably hit them and yield consistent recursive self-improvement.
If you are correct, then this is what the perfect Bayesian would expect as well.
I tried to say that being irrational aids discovery. If Bayesian equals winning, then you are correct. Here is another example. If everyone were perfectly rational, then a lot of explorations that unexpectedly yielded new insights would never have happened. Your claim that a perfect Bayesian would expect this sounds like hindsight bias to me.
Yes, the way we define ‘perfect Bayesian’ is unfair, but is this really a problem?
I tried to say that being irrational aids discovery.
If discovery contributes to utility then our Bayesian (expected utility maximizer) will take note of this.
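A toy sketch of this point (my own illustration, not from the thread): in a Bernoulli multi-armed bandit, a Bayesian agent using Thompson sampling explores exactly as much as its posterior uncertainty warrants, with no ad-hoc noise injected, and it typically outperforms an agent whose “exploration” is pure randomness. The arm probabilities and horizon below are arbitrary choices for the example.

```python
import random

def simulate(policy, true_probs, horizon, seed=0):
    """Run a Bernoulli bandit for `horizon` pulls; return total reward."""
    rng = random.Random(seed)
    k = len(true_probs)
    successes = [0] * k  # observed wins per arm
    failures = [0] * k   # observed losses per arm
    total = 0
    for _ in range(horizon):
        arm = policy(rng, successes, failures)
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total += reward
    return total

def thompson(rng, s, f):
    # Bayesian agent: sample a success rate from each arm's Beta posterior
    # (uniform Beta(1,1) prior) and pull the arm with the highest sample.
    # Exploration falls out of posterior uncertainty automatically.
    samples = [rng.betavariate(s[i] + 1, f[i] + 1) for i in range(len(s))]
    return samples.index(max(samples))

def uniform_random(rng, s, f):
    # Pure noisy exploration: pick an arm at random every time.
    return rng.randrange(len(s))

probs = [0.2, 0.5, 0.8]
bayes = sum(simulate(thompson, probs, 1000, seed=i) for i in range(20))
noisy = sum(simulate(uniform_random, probs, 1000, seed=i) for i in range(20))
```

Over 20 runs of 1,000 pulls, the Thompson agent’s total reward reliably exceeds the random explorer’s: discovery contributes to utility, so the expected-utility maximizer pursues it.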
Here is another example.
You are relying here on a definition of ‘rational’ that excludes being good at coordination problems.
this sounds like hindsight bias to me
Au contraire, I think our pride in our ‘irrationality’ is where the hindsight bias is! Like you said, we got lucky. That would be fine if our ‘luck’ were of the consistent type, but in all likelihood the way we have exposed ourselves to serendipity was suboptimal.
It’s entirely possible for our Bayesian to lose to you. It’s just improbable.