Yes, these “hidden goals” and “hidden questions” of descriptive idealization have predictive power: behavior clusters around them. The (potential) error is in implying that they hold a more fundamental role, that they exist as actual goals/questions/beliefs whose properties contradict those of their loosely connected (more idealistic) counterparts in a normative idealization. People behaving in a way that ignores more direct signals of healthcare quality and instead pursues healthcare providers’ prestige doesn’t easily contradict people normatively caring about quality of healthcare.
Sure, all it tells us is that the signal we evolved to extract from the environment when worrying about healthcare is related to credentials. That was probably a great way to actually solve healthcare problems in many past eras. If you really do care about healthcare, and the environment around you affords a low-cost signal in the form of credentials that correlates with better healthcare, then you’ll slowly adopt that policy or die; higher-cost signals might yield better healthcare, but at the expense of putting yourself at a disadvantage relative to competitors using the low-cost signal.
When I hear the term ‘hidden goal’ in these models, I generally substitute “goal that would have correctly yielded the desired outcome in less data-rich environments.” I agree it is misleading to tout a statement like, “look how foolish people are, caring more about credentials than about the real data behind doctors’ successes or treatments’ survival rates.” But I also don’t think Hanson or Kahneman is saying anything like that. I think they are saying, “Look at how unfortunate our intrinsic, evolved signal-processing machinery is. What worked great when the best you could do was hope to live to the age of 30 as a hunter-gatherer turns out not to be so great once you state more explicit goals tied to the data. Gee, if we could be more aware of the residual hunter-gatherer mechanisms that produced these cognitive artifacts, maybe we could correct for them or take advantage of them in some useful way.” Perhaps “vestigial goals” is a better term for what Hanson calls “hidden goals.”