What is someone who wishes to be rational supposed to do when the underlying hardware simply won’t cooperate?
Being aware of a bias yet unable to adapt your reasoning to compensate seems contradictory. When you say “I know I only think X because of bias Y, so my actual belief should be Z”, you seem to have already solved the problem in that instance, simply by swapping them out (in lambda-calculus notation: E[X:=Z]).
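To make the E[X:=Z] notation concrete, here is a minimal sketch in Python. The tuple encoding of expressions is my own toy choice for illustration, and it ignores binders and variable capture; it just walks the expression and replaces every occurrence of X with Z:

```python
def substitute(expr, var, replacement):
    """Return expr with every occurrence of `var` replaced by `replacement`.

    Expressions are encoded as strings (variables) or tuples (compound
    expressions). No binders, so no capture-avoidance is needed here.
    """
    if isinstance(expr, str):
        return replacement if expr == var else expr
    # Compound expression: substitute recursively in every sub-expression.
    return tuple(substitute(e, var, replacement) for e in expr)

# E = "believe X and act on X"; the swap rewrites both occurrences at once.
E = ("and", ("believe", "X"), ("act_on", "X"))
print(substitute(E, "X", "Z"))  # ('and', ('believe', 'Z'), ('act_on', 'Z'))
```

The point of the analogy: if beliefs really worked like expressions, one uniform substitution would fix every downstream occurrence in one step.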
The unknown unknowns are, in my opinion, the crux of the problem: the biases you have not (yet) recognized in specific situations, regardless of how well you have trained yourself to reflect on your own reasoning. Due to the nature of the problem, we wouldn’t even be aware of how much progress we have made in recognizing biases, or how much is left to be done. (Comparing the variance among reasoning agents would help: based on Aumann’s agreement theorem, we can in principle eliminate, or at least notice the existence of, biases that we do not share; but two agents with a shared bias would still converge on the same belief and thus remain oblivious to it*.)
What do? Do the best you can with the hand you were dealt: if, as a cosmic joke, it turned out that Occam’s Razor didn’t hold for vetting ToEs after all, too bad. At least we did our very best.
* I’m not certain this is a formal result, but it should hold in the majority of cases. Comments welcome.
“I know I only think X because of bias Y, so my actual belief should be Z”, you seem to already have solved the problem in that instance, by just switching them out (in lambda calculus: E[X:=Z]).
You’d think that should be the case, but beliefs don’t actually work like that. You may believe that you believe Z, but you’ll still behave as if you believe X. It’s possible to override belief X, but it’s not as easy as simply recognizing that you should (at least, that’s been my experience; yours may vary).
Chorus: Hi, Brent.