Someone who felt uncomfortable with Feynman’s bluntness and wanted to believe that there’s no conflict between rationality and social graces might argue that Feynman’s “simple proposition” is actually wrong insofar as it fails to appreciate the map–territory distinction: in saying, “No, it’s not going to work”, was not Feynman implicitly asserting that just because he couldn’t see a way to make it work, it simply couldn’t? …
While such an objection is not entirely without merit (it’s true that the map is not the territory; it’s true that authority is not without evidential weight), attending overmuch to such nuances distracts from worrying about the physics.
Here’s something I wrote earlier today: “I thought transactions wouldn’t cause ‘wait for lock’ unless requested explicitly, and I don’t think we request it explicitly. But maybe I’m wrong there?”
I don’t fully remember my epistemic state at the time, but I think I was pretty confident on both counts. But as it happens, I was wrong on the first count. This is the crucial piece of information we needed to understand what we were investigating.
I can imagine that I might instead have written “transactions won’t cause ‘wait for lock’ unless requested explicitly, and we don’t request it explicitly”. I think writing that would have been worse for me and worse for my team, because someone reading it would have been less likely to double check the wrong thing that I believed. (I don’t know if in reality my colleague who found the problem read my message and thought “what? Yes they will”, or “oh? that sounds wrong to me”, or “hm, I guess that’s something to check”, or maybe just didn’t see my message at all. But I think it’s unlikely-but-plausible that the more-confident version of my message could have cost several hours of debugging time.)
Would you say that in choosing to write the less-confident thing instead of the more-confident thing, I was distracting myself from worrying about the SQL? I think that would be kind of a weird thing to say, but perhaps defensible. But in any case I was worrying about saying true things, and I think that matters too.
(A Feynman quote that seems relevant: “The first principle is not to fool yourself – and you are the easiest person to fool.”)
Thanks for commenting! I agree that it’s good to communicate one’s uncertainty when one is uncertain. (From a certain perspective, it’s unfortunate that our brains and culture aren’t set up to do this in a particularly nuanced way; we only know how to say “X” and “I think X” rather than sharing likelihood ratios.) Perhaps read the second half of this post as expressing anxiety about tone-policing of confident-sounding language being used for social status regulation rather than to optimize communication of actual uncertainty?
Nod, but then perhaps that part isn’t saying “lack-of-grace is a virtue” so much as “a certain kind of criticism of lack-of-grace is a vice”? (I haven’t reread with this possibility in mind.)
In any case, I think I’m fine with that kind of tone policing being used for social status regulation when the confidence is unjustified.
I suppose you can say “if someone routinely talks with unjustified confidence, then eventually they’ll be wrong, and they can take the status hit then”. But I think we can update faster than that. E.g. I recall Scott Adams said Trump would win in 2016 with 99% probability or something? Trump did win, but I’m still comfortable judging this as overconfident without looking at his forecasting track record. (Though if someone were to look at his track record and found that he was well calibrated, I guess I’d have to be less comfortable.)
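One way to make the “we can update faster than that” intuition concrete: even after the predicted event happens, a 99% claim gains surprisingly little evidential support over a humbler hypothesis. A toy Bayes calculation (all the numbers here are made up for illustration):

```python
# H1 = "the forecaster really is calibrated at 99% here"
# H2 = "the forecaster is overconfident; the true chance was ~60%"
prior_h1 = 0.1  # assumed: low prior that anyone is calibrated at 99% on elections
prior_h2 = 0.9

p_win_given_h1 = 0.99
p_win_given_h2 = 0.60

# The predicted event happened; update on that single observation.
posterior_h1 = (prior_h1 * p_win_given_h1) / (
    prior_h1 * p_win_given_h1 + prior_h2 * p_win_given_h2
)
print(round(posterior_h1, 3))  # ≈ 0.155: one correct call barely moves the needle
```

The single observation multiplies the odds by only 0.99/0.60 ≈ 1.65, so a prior suspicion of overconfidence mostly survives one correct call.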
Often we never really learn the answer, e.g. with counterfactuals (“if the weather had been 3° colder that day, Hillary would have won”) or claims about what’s inside someone’s head (“they claim to sincerely believe X, but they obviously are just saying that to avoid censure”). “Is this confidence justified?” is another example here.
The thesis of the post is that people who are trying to maximize the accuracy of shared maps are going to end up being socially ungraceful sometimes, because sometimes social grace calls for obfuscating shared maps.
Criticism of unjustified confidence for being unjustified increases the accuracy of shared maps. Criticism of unjustified confidence for reasons of social status regulation is predictably not going to be limited to cases where the confidence is unjustified, even if it happens to be unjustified in a particular case.
Accuracy of shared maps is quantitative. A culture that’s optimized for social grace isn’t going to make people wrong about everything, and could make people less wrong about many things relative to many less graceful alternative cultures. (At minimum, if you’re not allowed to be confident, you can’t be overconfident; if you’re not allowed to talk about what’s inside someone’s head, you can’t be wrong about what’s inside someone’s head.)
Criticism of unjustified confidence for being unjustified increases the accuracy of shared maps. Criticism of unjustified confidence for reasons of social status regulation is predictably not going to be limited to cases where the confidence is unjustified, even if it happens to be unjustified in a particular case.
This sounds like it’s contrasting “criticism for being unjustified” against “criticism for social status regulation”. But those aren’t the same use of the word “for”, much like it would be weird to contrast “locking someone up for murder” against “locking someone up for deterrence”. (Though “for deterrence” might be a different “for” again, I’m not sure.)
To unpack, when I said
I think I’m fine with that kind of tone policing being used for social status regulation when the confidence is unjustified.
I didn’t intend to support someone being like “I want to do some social status regulation and I’m going to do it by tone policing some unjustified confidence”. I meant to support “this is unjustified confidence, I want less of this and to that end I’m going to do some social status regulation through the mechanism of tone policing”. I can’t tell if you’re yay-that or boo-that.
I guess that when you said
Perhaps read the second half of this post as expressing anxiety about tone-policing of confident-sounding language being used for social status regulation rather than to optimize communication of actual uncertainty?
I basically ignored the “rather than...” and thought you were just opposed to tone-policing of confident-sounding language in general. And the reason I did that might be that in my head, it’s surprising to talk about “tone policing for social status regulation, rather than tone policing to optimize communication”; rather, I’d expect to talk about “tone policing for social status regulation, in order to optimize communication”.
Scott Adams predicted Trump would win in a landslide. He wasn’t just overconfident, he was wrong! The fact that he’s not taking a status hit is because people keep reporting his prediction incompletely and no one bothers to confirm what he actually predicted (when I Google ‘Scott Adams Trump prediction’ in Incognito, the first two results say “landslide” within the first ten seconds and in the title, respectively).
Your first case is an example of something much worse than not updating fast enough.
Thanks for the correction! Bad example on my part then.
My guess is that the point is clear and fairly undisputed, and coming up with an actually correct example wouldn’t be very helpful. Still a little embarrassing.