Oh I don’t know. I think I’ve met some pretty irrational analytical philosophers too, actually. But I would expect the difference to be substantial, yes. Did you read about the Sokal affair? It says something about the level of irrationality and intellectual irresponsibility.
Irresponsibility is something very different from irrationality.
Do you judge postmodernists because their tribe does things that you don’t like or do you judge them because you think the average postmodernist would score less on a proper Rationality Quotient test than members of other tribes?
If you really think that they would score less on a Rationality Quotient test, it should be possible for you to make predictions about the effect size in numbers. You are free to set your error bars as wide as you wish, or to choose another tribe than analytical philosophers to compare with if you think there’s a better comparison.
Did you read about the Sokal affair? It says something about the level of irrationality and intellectual irresponsibility.
Right, finding a single anecdote where members of a tribe that you don’t like failed is a rational way to assess the general rationality of the average member of that tribe.
If you really think that they would score less on a Rationality Quotient test, it should be possible for you to make predictions about the effect size in numbers. You are free to set your error bars as wide as you wish, or to choose another tribe than analytical philosophers to compare with if you think there’s a better comparison.
I don’t even know how the test is constructed, so it would be downright silly of me to try to come up with predictions in terms of numbers.
Right, finding a single anecdote where members of a tribe that you don’t like failed is a rational way to assess the general rationality of the average member of that tribe.
Sarcasm does not further a constructive debate. Also, I think your way of arguing is generally too nit-picky and uncharitable. I wasn’t trying to argue against you or anything; I just wanted to give you a tip.
Sokal actually wrote a book with Jean Bricmont indicating that this was far from an isolated anecdote. Also my judgement from having (had to) read quite a bit of postmodernist crap is that Sokal is spot on.
I don’t even know how the test is constructed, so it would be downright silly of me to try to come up with predictions in terms of numbers.
No, the fact that you have some uncertainty about the test just indicates that you should choose a larger confidence interval than if you knew the details of the test. It shouldn’t stop you from being able to produce a confidence interval.
I wasn’t trying to argue against you or anything; I just wanted to give you a tip.
I don’t have any issue with people arguing with me. I’m more likely to have an issue with people who assume that I’m ignorant of the subject I’m talking about. Not knowing about Sokal would be a case of ignorance. But that’s still not a major issue.
Sarcasm does not further a constructive debate. Also, I think your way of arguing is generally too nit-picky and uncharitable.
Tribalism is a huge failure condition. I don’t think it’s helpful to pretend that it isn’t. Practicing charity in the sense of assuming that the people with whom one argues are immune to effects like tribalism is not conducive to truth finding.
You yourself wrote a post about identifying patterns of bad reasoning. You won’t get very far with that project if you debate under social norms that forbid people from pointing out those patterns.
The irony of you criticising Freud for not making falsifiable predictions while being unwilling to make concrete numeric falsifiable predictions about the supposed irrationality of postmodernists is too central to ignore out of a desire for politeness.
Part of science is that you are not charitable about predictions: you don’t interpret them as true regardless of what data you find.
That’s especially important when you say negative things about an outgroup that you don’t like. It’s a topic where you have to be extra careful to follow principles of proper reasoning.
This might seem nit-picky to you, but it’s very far from it. You don’t make a discourse more rational by analysing it in a dissociative way if you don’t actually apply your tools for bias detection.
The whole issue with the Sokal episode was that the journal’s editors were very charitable to Sokal and therefore published his paper.
I don’t even know how the test is constructed, so it would be downright silly of me to try to come up with predictions
No, the fact that you have some uncertainty about the test just indicates that you should choose a larger confidence interval than if you knew the details of the test. It shouldn’t stop you from being able to produce a confidence interval.
The fact that you have some uncertainty about the test also has some implications about the distribution of possible results. If a group is 10% less rational than another and that 10% is due to a characteristic that makes those group members systematically worse than the comparison group, you can measure a lot of group members and confirm that you get measurements that average 10% less.
If a group is 20% less rational than another group but there’s a 50% chance the test detects the difference and a 50% chance it doesn’t, that can also be described as you expecting results showing the group is 10% less rational. But unlike in the first case, you can’t take a lot of measurements and get a result that averages out to 10% less. You’ll either get a lot of results that average 20% less or a lot of results that aren’t less at all, depending on whether the test detects or doesn’t detect it.
And in the second case, the answer to “can I use the test to make predictions” is “no”. If you’re uncertain about the test, you can’t use it to make predictions, because you will be predicting the average of many samples (in order to reduce variation), and if you are uncertain about the test, averaging many samples doesn’t reduce variation.
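The distinction between the two cases can be sketched with a small simulation (all numbers hypothetical: a systematic 10% deficit versus a 20% deficit that the test detects with a single 50/50 coin flip):

```python
import random

random.seed(0)

def sample_mean_deficit(n, true_deficit, noise=0.05):
    """Average measured deficit over n subjects, each with individual noise."""
    return sum(true_deficit + random.gauss(0, noise) for _ in range(n)) / n

# Case 1: a systematic 10% deficit.
# Averaging many subjects converges on 10%.
case1 = sample_mean_deficit(10_000, 0.10)

# Case 2: a 20% deficit that the test detects with probability 1/2.
# The coin is flipped once (the test either works or it doesn't),
# so averaging many subjects converges on 0% or 20%, never on 10%.
test_detects = random.random() < 0.5
case2 = sample_mean_deficit(10_000, 0.20 if test_detects else 0.0)

print(round(case1, 2))  # close to 0.10
print(round(case2, 2))  # close to 0.20 or 0.00, depending on the coin
```

Averaging subjects shrinks the per-subject noise but does nothing to the one-shot uncertainty about whether the test works at all.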
but there’s a 50% chance the test detects the difference and a 50% chance it doesn’t
Rationality is not a binary variable, but continuous. It is NOT the case that the test has a chance of detecting something or nothing: the test will output a value on some scale. If the test is not powerful enough to detect the difference, it will show up as the difference being not statistically significant; the difference will be swamped by noise, not fully appear or fully disappear in any given instance.
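A quick sketch of this point (hypothetical scale: a true 5-point gap between group means, standard deviation 15): with few subjects the gap is swamped by noise, with many it emerges, but the test always returns a continuous difference rather than "detected" or "nothing".

```python
import random
import statistics

random.seed(1)

def group_scores(n, mean, sd=15):
    """Continuous test scores for n subjects (hypothetical 100-point scale)."""
    return [random.gauss(mean, sd) for _ in range(n)]

def observed_gap(n):
    """Observed mean difference between two groups whose true gap is 5 points."""
    return statistics.mean(group_scores(n, 100)) - statistics.mean(group_scores(n, 95))

small = observed_gap(20)      # noisy: the 5-point gap may be swamped
large = observed_gap(20_000)  # precise: the gap emerges from the noise
print(round(small, 1), round(large, 1))
```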
You’ll either get a lot of results that average 20% less or a lot of results that aren’t less at all
Nope—that would only be true if rationality were a boolean variable. It is not.
That doesn’t follow. For instance, imagine that one group is irrational because their brains freeze up at any problem that contains the number 8, and some tests contain the number 8 and some don’t. They’ll fail the former tests, but be indistinguishable from the other group on the latter tests.
I can imagine a lot of things that have no relationship to reality.
In any case, you were talking about a test that has a 50% chance of detecting the difference, presumably returning either 0% or 20% but never 10%. Your example does not address this case—it’s about different tests producing different results.
You were responding to Stefan. As such, it doesn’t matter whether you can imagine a test that works that way; it matters whether his uncertainty over whether the test works includes the possibility of it working that way.
Your example does not address this case—it’s about different tests producing different results.
If you don’t actually know that they freeze up at the sight of the number 8, and you are 50% likely to produce a test that contains the number 8, then the test has a 50% chance of working, by your own reasoning—actually, it has a 0% or 100% chance of working, but since you are uncertain about whether it works, you can fold the uncertainty into your estimate of how good the test is and claim 50%.
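The folding step is just the law of total probability; with the hypothetical numbers from the number-8 example:

```python
# Hypothetical numbers from the number-8 example: fold uncertainty about
# which test you produced into an effective detection probability.
p_contains_8 = 0.5      # chance the test you happen to write contains an 8
p_detect_if_8 = 1.0     # the group reliably fails tests containing an 8
p_detect_if_no_8 = 0.0  # otherwise they are indistinguishable

# Law of total probability: before knowing which test you wrote,
# the test "works" with probability 0.5.
p_detect = p_contains_8 * p_detect_if_8 + (1 - p_contains_8) * p_detect_if_no_8
print(p_detect)  # 0.5
```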
Right, finding a single anecdote where members of a tribe that you don’t like failed is a rational way to assess the general rationality of the average member of that tribe.
Keep in mind the editors of Social Text did not believe Sokal’s article was actually sound philosophy. Not understanding it, they preferred to give it the benefit of the doubt. The only thing that Sokal was able to trick them into believing was that the article was intended to be sound philosophy.
Keep in mind the editors of Social Text did not believe Sokal’s article was actually sound philosophy. Not understanding it, they preferred to give it the benefit of the doubt.
That’s like excusing oneself from causing a car crash on the grounds of being drunk.
Keep in mind the editors of Social Text did not believe Sokal’s article was actually sound philosophy. Not understanding it, they preferred to give it the benefit of the doubt.
Sokal is a physicist, and a publication like this would have been a major embarrassment inside his field. So he had no choice but to disclose the hoax before anyone else (who maybe didn’t get the joke) commented.
That’s like excusing oneself from causing a car crash on the grounds of being drunk.
In what way? Who was injured?
They are both pleading incompetence as an excuse for failure.
We only know that’s what they said afterwards.
By the same argument, we only know it was intended to be a hoax because Sokal said so afterward....