The response in the interview mistakes the nature of the original question. Not every meaningful statement sits on a spectrum of truth-values, and not every meaningful statement corresponds to a claim of empirical fact with quantifiable uncertainty. Saying “I have complete confidence in someone” is not a probabilistic assertion akin to saying “I am 100% certain it will rain tomorrow.” Rather, it’s an expression of trust, commitment, or endorsement—often used in a social or political context to convey support, leadership assurance, or accountability.
Confidence in a person, especially in the context of hiring or appointment, isn’t a measurable probability about future events; it’s a communicative act that signals intent and judgment. It doesn’t imply literal omniscience or absolute predictive accuracy. By dodging the question with a pseudo-Bayesian appeal to uncertainty, the speaker appears evasive rather than thoughtful. Framing it this way undermines the communicative clarity and purpose of leadership rhetoric, which exists to signal commitment, not to serve as an epistemological disclaimer.
This exchange is a clear example of someone co-opting the language of the rationalist community divorced from its intended meaning and purpose. The interviewer’s question wasn’t about epistemic certainty or making a falsifiable prediction; it was a straightforward request for a statement of support or trust in an appointed individual.
The Bayesian validity still seems pretty straightforward to me. I have more trust in some people than others, which I would suggest cashes out as my credence that they won’t do something that violates the commitments they’ve made (or violates their stated values, etc). And certainly I should never have 0% or 100% trust in that sense, or the standard objection applies: no evidence could shift my trust.
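The "standard objection" above is just a consequence of Bayes' rule: a credence of exactly 0 or 1 is a fixed point of updating, so no evidence can ever move it. A minimal sketch of that arithmetic (the function name and the specific numbers are illustrative, not from the thread):

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) via Bayes' rule, given prior P(H) and the
    likelihoods P(E|H) and P(E|not-H)."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator if denominator > 0 else prior

# A moderate prior responds to evidence: strong counter-evidence
# (much likelier under not-H) pulls 0.9 down substantially.
print(bayes_update(0.9, 0.2, 0.8))       # ≈ 0.69

# Extreme priors are immune: the (1 - prior) and prior factors
# zero out the competing hypothesis entirely.
print(bayes_update(1.0, 0.001, 0.999))   # stays 1.0
print(bayes_update(0.0, 0.999, 0.001))   # stays 0.0
```

This is why trust-as-credence should never be exactly 0% or 100%: at those extremes the update rule degenerates and the belief becomes unfalsifiable by construction.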
(that said, on one reading of your comment it’s veering close to object-level discussion of the wisdom or foolishness of Trump in particular, which I’d very much like to avoid here. Hopefully that’s just a misread)