To some extent, people seem to be underconfident about what they’re willing to say they saw (though I couldn’t find a betting study on inattentional blindness testing whether people asked to bet on their confidence can accurately gauge “how much” they’re perceiving).
And usually overconfident about what they’re willing to say they didn’t perceive.
I bet people would learn pretty quickly given a few iterations of being paid for accuracy, but most people are running on faulty self-models where “noticing the gorilla” and “noticing that they have noticed the gorilla” are treated as the same thing. The strength of the latter gets conflated with evidence about the former, and you get “I definitely didn’t notice the gorilla, because by definition if I had, I’d have noticed that I did!”
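To make the “paid for accuracy” idea concrete, here’s a minimal sketch assuming a quadratic (Brier-style) scoring rule, which is the standard way to make honest probability reports the payoff-maximizing strategy. The payout scale and the `report`/`outcome` names are just illustrative, not from any actual study.

```python
def brier_payout(report: float, outcome: bool, max_pay: float = 10.0) -> float:
    """Pay a subject via a quadratic (Brier) scoring rule.

    report:  subject's stated probability that they saw the gorilla (0..1)
    outcome: whether they actually saw it (verified somehow, e.g. eye tracking)

    The quadratic rule is "proper": expected payout is maximized by
    reporting your true subjective probability, so calibration literally pays.
    """
    actual = 1.0 if outcome else 0.0
    return max_pay * (1.0 - (report - actual) ** 2)

# A subject who hedges at 0.5 earns 7.50 either way; one who is
# rightly confident earns more, and overconfidence is costly.
print(brier_payout(0.5, True))   # 7.5
print(brier_payout(0.9, True))   # 9.9
print(brier_payout(0.9, False))  # 1.9
```

Run enough trials under a rule like this and the feedback loop itself should start teaching people to distinguish “I noticed” from “I noticed that I noticed.”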
To start approaching accurate calibration, you have to look for signs of your own processing, much as those researchers did. It’s not enough to notice that “gorilla” seems more salient than normal; you also have to realize that something seemingly “random” popping into your head actually has a fairly high likelihood of being caused by something rather than being truly random, even though you have at best very weak awareness of having seen it.
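Here’s that likelihood argument as a toy Bayesian update. Every number below (the prior and both likelihoods) is a made-up assumption for illustration, not a measured value; the point is only that even a modest likelihood ratio makes unexplained salience fairly strong evidence.

```python
# Toy Bayes: how much evidence is "gorilla" feeling unusually salient?
p_saw = 0.5               # assumed prior: half of viewers register it at some level
p_salient_if_saw = 0.6    # assumed: salience fairly likely given weak perception
p_salient_if_not = 0.05   # assumed: "gorilla" rarely pops to mind at random

# Bayes' rule: P(saw | salient)
numerator = p_salient_if_saw * p_saw
posterior = numerator / (numerator + p_salient_if_not * (1 - p_saw))
print(f"P(saw gorilla | 'gorilla' feels salient) = {posterior:.2f}")  # ~0.92
```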
Arguments are a pretty easy way to induce this sort of “generalized blindsight.” People often don’t like admitting that they were wrong, so there’s motivation not to notice what they’ve noticed.