Damn, you edited your comment >.<
We are in agreement that
is an invalid argument, yes. My problem is that I don’t know what an argument like

Most Fs are Gs.
H is an F.
Probably, H is a G.

is even meant to mean.
Well, to digress a bit, the real problem is that I’m not sure any of this is actually getting to the heart of the issue, which is that probabilistic arguments aren’t really logical arguments at all. Not in the sense that they’re illogical or invalid, but the whole system of Bayesian reasoning just doesn’t map 1:1 onto logic.
What I mean by this is that a logical brain, as one might design one, would have a small pool of statements, the belief pool, which it would add to as observations or deductions are made. A maze-solving robot, for example, might have beliefs such as {at time t=0 I was at START, at time t=1 I was at (1,0), at time t=1 there was a wall on my left, …}. It would add to the belief pool as facts about the robot’s location and the maze are discovered, but never remove a statement from the pool, since the pool contains only certainties.
Logical arguments, like “If at any time there was a wall on my left and I was at position P, then the maze has a wall at configuration Q”, are useful to this robot, since it can use them to fill its belief pool with such arguments’ conclusions. Moreover, a classification of arguments into valid and invalid is useful for this robot, so that it can ignore those that could introduce false statements into its belief pool.
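To make the picture concrete, here’s a minimal sketch (my own, not from the original discussion) of such a belief pool: an add-only set of statements, grown by observation and by a valid inference rule. The statement strings and the rule are all hypothetical.

```python
# Sketch of a "belief pool": an add-only set of certain statements.
beliefs = set()

def observe(statement):
    """Observations go straight into the pool; they are certainties."""
    beliefs.add(statement)

def apply_modus_ponens(rules):
    """rules: iterable of (premise, conclusion) pairs, i.e. 'if P then Q'.
    A valid rule can never introduce a false statement, so its
    conclusions may safely join the pool. Nothing is ever removed."""
    added = True
    while added:
        added = False
        for premise, conclusion in rules:
            if premise in beliefs and conclusion not in beliefs:
                beliefs.add(conclusion)
                added = True

# Hypothetical maze facts, as in the robot example:
observe("at t=1 I was at (1,0)")
observe("at t=1 there was a wall on my left")
apply_modus_ponens([
    ("at t=1 there was a wall on my left",
     "the maze has a wall at configuration Q"),
])
print("the maze has a wall at configuration Q" in beliefs)  # True
```

The point of the classification into valid and invalid rules is visible here: only rules that preserve truth may feed `apply_modus_ponens`, because the pool has no mechanism for retraction.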
You can’t really do the same thing with probabilities. The closest thing to a representation of probabilistic reasoning in logic is the mathematical deduction of statements about conditional probability, with conclusions like P(A | evidence XYZ) = 0.462. When you encounter new observations, you use them by trying to generate theorems of the form P(X | all previous evidence + the new evidence) = Y, whereupon you can plug X and Y into your expected utility calculations or whatever.

In this system an argument like “X% of F are G, and H is an F, so H is probably G” isn’t really an argument whose conclusion you can then import into your “belief set”, because there is no such thing. If the argument means anything at all, it’s as an informal derivation of P(H is G | all relevant evidence) after informing the reader that X% of F are G and that H is an F, assuming the reader has no other relevant evidence. It wouldn’t make sense to say that this argument is invalid because H might not be a G: it isn’t asserting that H is G, it’s asserting that P(H is G | relevant evidence) = y.
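As a toy illustration of this reading (the numbers are made up), here’s the “X% of F are G” argument treated as a derivation of a conditional probability, with a Bayes-rule update when new evidence arrives rather than any retraction of a belief:

```python
from fractions import Fraction

# Suppose 90% of Fs are Gs, and all we know about H is that it is an F.
p_G_given_F = Fraction(9, 10)

# With no other relevant evidence, the argument's conclusion is just
# P(H is G | H is F) = 0.9 -- a theorem about probabilities, not a new
# "belief" that H is G.
p_H_is_G = p_G_given_F
print(float(p_H_is_G))  # 0.9

# New evidence E doesn't retract anything; it yields a *different*
# theorem, P(H is G | H is F, E), via Bayes' rule in odds form. Say E
# is twice as likely if H is G than if it is not:
likelihood_ratio = Fraction(2, 1)
posterior_odds = (p_H_is_G / (1 - p_H_is_G)) * likelihood_ratio
p_given_E = posterior_odds / (1 + posterior_odds)
print(float(p_given_E))  # 18/19, roughly 0.947
```

Nothing here resembles adding “H is G” to a pool of certainties; each new piece of evidence simply produces a new conditional-probability theorem to feed into the expected utility calculation.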