More generally, for the basic decision-making tools we have a four-part collection: automatic application, automatic correction, deliberative application, deliberative correction. For goals, that's wanting, liking, approving, and approving of approving; for beliefs, it's anticipation, learning/surprise, professed belief, and correspondence with referent (Tarskian truth).
For example, correcting a wrong belief in belief (a professed belief that doesn't reflect one's more accurate anticipations) corresponds to getting rid of fake professed utility functions that don't reflect one's actual detailed values. Both are errors in handling the tools of deliberative reasoning (about beliefs and goals, respectively), and fixing either error should in theory improve the quality of one's decisions (or of one's theorizing about decision-making).