It uses bounded rationality, not just because that’s what we evolved, but because heuristics, probabilistic logic and rational ignorance are more cost-efficient at the margin (the improvements in decision making don’t produce enough gain to outweigh the cost of the extra thinking).
I’m not sure about this. I think we use bounded rationality because that’s the only kind that can physically exist in the universe. You seem to be making the stronger statement that we’re near-optimal in terms of rationality—does this mean that Less Wrong can’t work?
Thank you for the feedback. Most appreciated. I’ve corrected the links you mentioned.
Perhaps a clearer example of what I mean by bounded rationality comes from computing: when faced with a choice between two algorithms, the first of which is provably correct and never fails, and the second of which can fail, but only rarely, the optimal decision can be to pick the latter, because the fallible one is typically cheaper or faster. An example of this is UUIDs: they can theoretically collide but, in practice, are extremely unlikely to do so.
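To make the UUID point concrete, here is a small sketch (my own illustration, not from the original comment) that estimates the collision probability of random version-4 UUIDs using the standard birthday-bound approximation. The function name is hypothetical:

```python
import math
import uuid

def uuid4_collision_probability(n: int) -> float:
    """Birthday-bound approximation of the probability that n random
    version-4 UUIDs contain at least one collision."""
    # A uuid4 has 122 random bits (6 of the 128 bits are fixed
    # version/variant bits), so the space of values is 2**122.
    space = 2 ** 122
    # P(collision) ≈ 1 - exp(-n*(n-1) / (2 * space)); expm1 keeps
    # precision when the exponent is tiny.
    return -math.expm1(-n * (n - 1) / (2 * space))

# Even a billion UUIDs leave collision odds on the order of 1e-19.
print(uuid.uuid4())                          # a sample random UUID
print(uuid4_collision_probability(10 ** 9))  # vanishingly small
```

In other words, the "can fail" algorithm is so unlikely to fail that paying for a provably collision-free alternative (such as a central ID-issuing authority) is rarely worth the cost.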
My point is that we shouldn’t assume AIs will even try to be as logical as possible. They may, rather, try only to be as logical as is optimal for achieving their purposes.
I don’t intend to claim that humans are near optimal. I don’t know. I have insufficient information. It seems likely to me that what we were able to biologically achieve so far is the stronger limit. I merely meant that, even were that limitation removed (by, for example, brains becoming uploadable), additional limits also exist.