Again, this changes nothing. In this case you will have to calculate the expected utility of using your intuition, which seems just as impossible to me.
I totally agree that it’s impossible exactly. So people use approximations everywhere. The trigger for the habit is thinking something like “Moving to California is a big decision.” Then you think “Is there a possibility for a big gain if I use more deliberative reasoning?” Then, using a few heuristics, you may answer “yes.” And so on, approximating at every step, since that’s the only way to get anything done.
Hm, that seems to be more in the context of “patching over” ideas that are mostly right but have some problems. I’m talking about “fixing” theories that are exactly right but impossible to apply.
One of the more interesting experiences I’ve had learning about physics is how much of our understanding of physics is a massive oversimplification, because it’s just too hard to calculate the optimal answer. Most Nobel Prize–winning work comes not from new laws of physics, but from figuring out how to approximate those laws in a way that is complicated enough to be useful but just simple enough to be solvable. And so with rationality in this case, I think. The high-importance rationality work is not about new laws of rationality or strange-but-easy results, but about approximations of rationality that are complicated enough to be useful but simple enough to be solvable.
You mean something along the lines of what I have written here?