My main point regarding the advantage of being “irrational” was that if we all thought like perfect rational agents, e.g. closer to how Eliezer Yudkowsky thinks, we would have missed out on many discoveries made by people pursuing “Rare Disease for Cute Kitten” activities.
How much of what we know was actually the result of people thinking quantitatively and attending to scope, probability, and marginal impacts? How much of what we know today is the result of dumb luck versus goal-oriented, intelligent problem solving?
What evidence do we have that intelligent, goal-oriented experimentation yields enormous advantages over evolutionary discovery relative to its cost? What evidence do we have that any increase in intelligence vastly outweighs its computational cost and the time needed to discover it?