This rule seems to imply that if we argue for X, we should only do so if we believe X with more than 50% credence.
Being an “argument for” is anti-inductive: an argument stops working in either direction once it’s understood. You believe what you believe, at whatever level of credence you happen to have. You can make arguments. Others can change either their understanding or their belief in response. These things don’t need to be related. And there is nothing special about 50%.
I don’t get what you mean. Assuming you argue for X, but you don’t believe X, it would seem something is wrong, at least from the perspective of individual rationality. For example, you argue that it is raining outside without believing that it is raining outside. This could be classified as lying (deception) or as bullshitting (you don’t care about the truth).
What does “arguing for” mean? There’s an expectation that a recipient changes their mind in some direction. This expectation goes away for a given argument once it’s been considered, whether it had that effect or not. Repeating the argument won’t present an expectation of changing the mind of a person who already knows it, in either direction, so the argument is no longer an “argument for”. This is what I mean by anti-inductive.
Assuming you argue for X, but you don’t believe X
Suppose you don’t believe X, but someone doesn’t understand an aspect of X, such that you expect that understanding it would increase their belief in X. Is this an “argument for” X? Should it be withheld, keeping the other’s understanding avoidably lacking?
What does “arguing for” mean? There’s an expectation that a recipient changes their mind in some direction. This expectation goes away for a given argument once it’s been considered, whether it had that effect or not.
Here is a proposal: A argues for X with Y iff A claims 1) that Y, and 2) that Y is evidence for X, in the sense that P(X|Y) > P(X|¬Y). The latter can be considered true even if you already believe Y.
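This definition can be checked mechanically. A minimal sketch (the joint distribution over X and Y below is made up for illustration):

```python
# Toy joint distribution over (X, Y); the numbers are made up.
joint = {
    (True, True): 0.30,   # X and Y
    (True, False): 0.10,  # X and not Y
    (False, True): 0.20,  # not X and Y
    (False, False): 0.40, # not X and not Y
}

def cond(joint, x_val, y_val):
    """P(X = x_val | Y = y_val) from the joint distribution."""
    p_y = sum(p for (x, y), p in joint.items() if y == y_val)
    return joint[(x_val, y_val)] / p_y

p_x_given_y = cond(joint, True, True)       # P(X|Y)  = 0.30 / 0.50 = 0.6
p_x_given_not_y = cond(joint, True, False)  # P(X|¬Y) = 0.10 / 0.50 = 0.2

# Y is evidence for X in the proposed sense iff P(X|Y) > P(X|¬Y):
print(p_x_given_y > p_x_given_not_y)  # True
```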
Suppose you don’t believe X, but someone doesn’t understand an aspect of X, such that you expect that understanding it would increase their belief in X. Is this an “argument for” X? Should it be withheld, keeping the other’s understanding avoidably lacking?
It seems that arguments provide evidence, and Y is evidence for X if and only if P(X|Y) > P(X|¬Y), that is, when X and Y are positively probabilistically dependent. If I think they are positively dependent and you think they are not, then this of course won’t convince you.
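As a sanity check that the two phrasings agree (a standard identity, assuming 0 < P(Y) < 1), the law of total probability gives

```latex
P(X) = P(X \mid Y)\,P(Y) + P(X \mid \neg Y)\,P(\neg Y)
```

so P(X) is a weighted average of P(X|Y) and P(X|¬Y). Therefore P(X|Y) > P(X|¬Y) iff P(X|Y) > P(X), which is in turn equivalent to P(X∧Y) > P(X)·P(Y), the usual definition of positive probabilistic dependence.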
Assuming you argue for X, but you don’t believe X, it would seem something is wrong, at least from the individual rationality perspective.
Belief is a matter of degree. If someone else thinks it’s 10% likely to be raining, and you believe it’s 40% likely to be raining, then we could summarize that as “both of you think it’s not raining”. And if you share some of your evidence and reasoning for thinking the probability is more like 40% than 10%, then we could maybe say that this isn’t really arguing for the proposition “it’s raining”, but rather the proposition “rain is likelier than you think” or “rain is 40% likely” or whatever.
But in both cases there’s something a bit odd about phrasing things this way, something that cuts a bit skew to reality. In reality there’s nothing special about the 50% point, and belief isn’t a binary. So I think part of the objection here is: maybe what you’re saying about belief and argument is technically true, but it’s weird to think and speak that way because in fact the cognitive act of assigning 40% probability to something is very similar to the act of assigning 60% probability to something, and the act of citing evidence for rain when you have the former belief is often just completely identical to the act of citing evidence for rain when you have the latter belief.
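That similarity can be made precise: in odds form, Bayes’ rule applies the same likelihood ratio regardless of which side of 50% the prior is on. A minimal sketch (the likelihood ratio is a made-up number):

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior via the odds form of Bayes: posterior odds = prior odds * LR."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

lr = 3.0  # hypothetical likelihood ratio for some evidence of rain, e.g. dark clouds

# Citing this evidence is the same operation whether you start below or above 50%:
print(round(bayes_update(0.4, lr), 3))  # 0.667
print(round(bayes_update(0.6, lr), 3))  # 0.818
```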
The issue for discourse is that beliefs come in degrees, but they lose this feature when expressed. Declarative statements are mostly discrete. (Saying “It’s raining outside” doesn’t communicate how strongly you believe it, except that it is more than 50% -- but again, the fan of championing will deny even that in certain discourse contexts.)
Talking explicitly about probabilities is a workaround, a hack where we still make binary statements, just about probabilities. But talking about probabilities is kind of unnatural, and people (even rationalists) rarely do it. Notice how both of us made a lot of declarative statements without indicating our degrees of belief in them. The best we can do, without using explicit probabilities, is to use qualifiers like “I believe that”, “It might be that”, “It seems that”, “Probably”, “Possibly”, “Definitely”, “I’m pretty sure that”, etc.
See https://raw.githubusercontent.com/zonination/perceptions/master/joy1.png
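For illustration, the vagueness of such qualifiers can be sketched as rough probability ranges. The ranges below are my own made-up guesses for the sketch, not data from the linked survey:

```python
# Made-up, illustrative (and deliberately overlapping) ranges for qualifiers.
QUALIFIER_RANGES = {
    "definitely": (0.95, 1.00),
    "I'm pretty sure that": (0.80, 0.95),
    "probably": (0.60, 0.85),
    "it seems that": (0.50, 0.80),
    "it might be that": (0.20, 0.55),
    "possibly": (0.05, 0.45),
}

def qualifiers_for(p):
    """Return every qualifier whose range contains credence p."""
    return [phrase for phrase, (lo, hi) in QUALIFIER_RANGES.items() if lo <= p <= hi]

print(qualifiers_for(0.7))  # ['probably', 'it seems that']
```

The overlap is the point: a single declarative sentence with a qualifier pins down a credence only to a wide, fuzzy band.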
Here is a proposal: A argues for X with Y iff A claims 1) that Y, and 2) that Y is evidence for X, in the sense that P(X|Y) > P(X|¬Y). The latter can be considered true even if you already believe Y.
I agree, that’s a good argument.
The best arguments confer no evidence; they guide you in putting together the pieces you already hold.
Yeah, aka Socratic dialogue.
Alice: I don’t believe X.
Bob: Don’t you believe Y? And don’t you believe that if Y, then X?
Alice: Okay I guess I do believe X.
The point is, conditional probability doesn’t capture the effect of arguments.
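One way to make the Socratic point concrete: if Alice is probabilistically coherent, high credence in Y and in the material conditional “if Y then X” already forces high credence in X, with no new evidence supplied. A sketch with made-up numbers, using the standard coherence bound P(X) ≥ P(Y) + P(Y → X) − 1:

```python
# "Y -> X" is the material conditional (not-Y or X). Bob's questions add no
# evidence; they surface what Alice's existing credences already commit her to.
p_y = 0.90            # Alice's credence in Y (made up)
p_y_implies_x = 0.95  # Alice's credence in "if Y then X" (made up)

lower_bound_on_p_x = max(0.0, p_y + p_y_implies_x - 1)
print(round(lower_bound_on_p_x, 2))  # 0.85
```

The bound follows because X contains Y ∧ (Y → X), and P(A ∧ B) ≥ P(A) + P(B) − 1 for any A, B.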