Does thinking that A is 45% likely mean that you think the negation of A is 5% likely, or 55% likely? Don’t answer that; the negation is 55% likely.
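(The arithmetic, assuming A is a well-posed binary claim, is just the complement rule: P(\neg A) = 1 - P(A) = 1 - 0.45 = 0.55.)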
But imagine making a judgment about someone's personality. A person who accepts MBTI's framework, in which thinking and feeling are mutually exclusive types, writes that someone has a 55% chance of being a thinker type; in doing so they make an implicit, untracked judgment that the person has almost a 45% chance of being a feeler type and not a thinker. A rational Bayesian is not so silly, of course: whether someone is a feeler and whether they are a thinker are two independent questions, buddy.
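To make the contrast concrete, a sketch with made-up numbers: under the exclusive-and-exhaustive framing one number pins down the other, while under the separate-questions framing the two numbers float freely.

\text{Exclusive and exhaustive: } P(F) = 1 - P(T) = 1 - 0.55 = 0.45

\text{Separate questions: } P(T) = 0.55 \text{ and } P(F) = 0.70 \text{ can coexist; nothing forces } P(F) = 1 - P(T).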
The models in a person's mind can be inferred from the estimate on their paper, and while the estimate itself may be accurate, the models the prediction stems from may be deeply flawed.
By the logic of personality taxonomies and of relations in the world, "the negation of A" has many connotations.
Maybe the trouble is with the words "negation", "opposite", and "falsehood", where the word "absence" would serve better. The presence of evidence of falsehood is not the same as the absence of evidence of truth, even if the absence of evidence of truth is itself one kind of weak evidence of falsehood.
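That last clause is the standard Bayesian point that absence of evidence is weak evidence of absence. As a sketch: if some evidence E is more likely when A is true than when A is false, then failing to observe E must lower the probability of A, if only slightly.

\text{If } P(E \mid A) > P(E \mid \neg A), \text{ then } P(\neg E \mid A) < P(\neg E \mid \neg A), \text{ so by Bayes, } P(A \mid \neg E) < P(A).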
I read this twice and can’t pick up on what you’re thinking. You could focus your attention on your question and write more from within it (e.g., vague gesturing from different angles; toy problems / examples; partial formalizations; etc.).
One thing to say about negation is that often the model uncertainty is concentrated in the negation. Any probability estimate, say of A (vs. not-A), always has a third option: MU = "(Model Uncertainty) I'm confused; maybe the question doesn't make sense, maybe A isn't a coherent claim, maybe the concepts I used aren't the right concepts to use, maybe I didn't think of a possibility, etc. etc.". Probability theory still makes sense: you can always ask pretty answerable questions, e.g. "what am I seeing right now", and use those as grounding for vaguer claims if necessary. But the point is, if, as is usual, A is a specific claim like "the virus spreads with R > 2", then the negation not-A could naturally be taken to mean "the virus spreads with R ≤ 2, or the question is ill-defined (e.g. because R is very different in different places), or bakes in a confusion (e.g. there's no virus, or even no such thing as a virus)". Then not-A is "getting extra probability" from the model uncertainty, whereas A seems to be a positive statement (it posits a state of affairs).
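To make the asymmetry concrete, with made-up numbers: split the outcome space three ways and notice which side of the A / not-A ledger the third slice lands on.

P(R > 2) = 0.45, \quad P(R \le 2) = 0.40, \quad P(MU) = 0.15

P(\text{not-}A) = P(R \le 2) + P(MU) = 0.40 + 0.15 = 0.55, \quad \text{while } P(A) = 0.45 \text{ carries none of the model uncertainty.}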
I tend to write my propositions in a notepad like
A: 75%
B: 34%
C: 60%
And so on. Are you telling me that "~A: 75%" means not only that ~A has a 75% likelihood of being true, but also that A vs. ~A has a 25% chance of being the wrong question? If that were true, I would expect "A: 75%" to mean not only that A is true with 75% likelihood, but also that A vs. ~A is the right question with 75% likelihood (high model certainty). But can't a proposition be more or less confused or flawed along multiple different metrics, to someone who understands what this whole A/~A business is all about?
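One way to formalize that "multiple metrics" point, as a sketch rather than anything standard: introduce a separate proposition W = "the A vs. ~A framing is well-posed" and track it explicitly, so the headline number no longer has to double as a model-certainty score.

\text{Track both: } P(W) = 0.90 \text{ and } P(A \mid W) = 0.75, \text{ giving } P(A \wedge W) = 0.90 \times 0.75 = 0.675,

\text{rather than reading "A: 75\%" as simultaneously asserting } P(W) = 0.75.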