Hi Viliam, thanks for your interesting and thoughtful response. Possibly I should have used another example. There are other, more clear-cut cases in e.g. the postmodernist tradition, but I wanted someone better-known.
The reason I chose him was not to signal loyalty to the STEM tribe, but rather because he is taken to be a textbook example of irrationality by Popper and Gellner, two of my favourite philosophers. Popper claimed that Freud’s theories were unfalsifiable and that for any possible event E, both E and not-E were standardly taken to confirm them. This is inconsistent with probability theory, as pointed out in “Conservation of Expected Evidence” (which is a very Popperian post). The reason Freud and his followers (some have thought that certain of his followers were actually worse on this point than Freud himself) made this mistake (if they did) was presumably confirmation bias (falsificationism can be seen as a tool to counter confirmation bias).
There is a huge literature on whether this claim is actually true. I have read Freud, as well as Gellner’s (to my mind very interesting) book on psycho-analysis and some of Popper’s texts on the topic, so I’m not merely repeating ideas I’ve heard from others. That said, I don’t know the subject well enough to go into a detailed discussion of your claims. Also, it’s somewhat tangential to the topic. My point was not to bash Freud; that was, so to speak, a side effect of my claim.
Regarding your historical claims, I think it’s very hard to establish who introduced nebulous ideas such as Freud’s tripartite model of the mind. Some claim that Plato’s theory of the mind foreshadowed it. Gellner claims that all the good original ideas in Freud are taken from Nietzsche. I don’t know enough about the topic to determine whether any of these claims are true, but in order to establish whether they are, or whether Freud really was as significant and original as you claim, one would need to take a deep plunge into the history of ideas.
For the record, that long comment was not directed entirely at you; it was something I had already thought should be written, and reading your comment was simply the moment when my inaction changed to action.
People are full of biases and rationalizations, and if you give them a theory which says “actually, other people often don’t even know what happens in their own minds”, well, that can hurt them regardless of whether the theory is true. And yes, this is what most amateur “psychologists” do after seeing “psychoanalysis” done on TV and learning the relevant keywords. And I guess quite a few professional psychologists are no better than this. And yes, it made it difficult to argue against Freud in cases where he was wrong.
Still, as I wrote, he was capable of changing his mind. And other psychoanalysts later disagreed on some topics. But without a proper scientific method, we can’t be sure that these changes really were improvements, as opposed to random drift (“I am a high-status psychoanalyst, so I will signal it by adding my random opinion to our set of sacred beliefs”).
Some parts of psychoanalysis make predictions; the problem is that, unlike in physics, humans can react in many different ways. It’s like black-box testing where each “box” is internally wired differently. We do have a prediction that a dream will contain a censored version of a suppressed desire, and it feels like it should be testable. But how specifically will the desire be censored? Uhm… that depends on the specific person and on what associations they have, so again we can suspect that any result could be “explained” as some form of censorship of something.
According to Wikipedia, Popper compared Freud with Einstein, as two people living in the same era whose scientific rigor was completely different. Yeah, there was a huge difference. There was also a huge difference in the amount and quality of data they had, the available tools, the complexity of the studied objects, and the general waterline of sanity in their fields. (Again, “it’s magic” and “people actually don’t think” were the respected alternative theories. Imagine starting from a similar position in physics.)
Like I said, there is a huge discussion on this issue in the philosophy of science. My guess is that most of your arguments above have already been discussed extensively.
Grünbaum’s book is considered a classic on the subject and might be a place to start (I haven’t read it, though Gellner refers to it a lot). He is critical of psycho-analysis but rejects Popper’s view of it as a pseudo-science.
There are other, more clear-cut cases in e.g. the postmodernist tradition
By how many standard deviations of the general public would you predict analytical philosophers or physicists will outperform academic postmodernists once Stanovich’s test is ready?
Oh, I don’t know. I think I’ve met some pretty irrational analytical philosophers too, actually. But I would expect the difference to be substantial, yes. Did you read about the Sokal affair? It says something about the level of irrationality and intellectual irresponsibility.
Irresponsibility is something very different from irrationality.
Do you judge postmodernists because their tribe does things that you don’t like, or do you judge them because you think the average postmodernist would score lower on a proper Rationality Quotient test than members of other tribes?
If you really think they would score lower on a Rationality Quotient test, it should be possible for you to make predictions about the effect size in numbers. You are free to set your error bars as wide as you wish, or to choose another tribe than analytical philosophers to compare against if you think there’s a better comparison.
Did you read about the Sokal affair? It says something about the level of irrationality and intellectual irresponsibility.
Right, finding a single anecdote where members of a tribe that you don’t like failed is a rational way to assess the general rationality of the average member of that tribe.
If you really think they would score lower on a Rationality Quotient test, it should be possible for you to make predictions about the effect size in numbers. You are free to set your error bars as wide as you wish, or to choose another tribe than analytical philosophers to compare against if you think there’s a better comparison.
I don’t even know how the test is constructed, so it would be downright silly of me to try to come up with predictions in terms of numbers.
Right, finding a single anecdote where members of a tribe that you don’t like failed is a rational way to assess the general rationality of the average member of that tribe.
Sarcasm does not further a constructive debate. Also, I think your way of arguing is generally too nit-picky and uncharitable. I wasn’t trying to argue against you or anything; I just wanted to give you a tip.
Sokal actually wrote a book with Jean Bricmont indicating that this was far from an isolated anecdote. Also my judgement from having (had to) read quite a bit of postmodernist crap is that Sokal is spot on.
I don’t even know how the test is constructed, so it would be downright silly of me to try to come up with predictions in terms of numbers.
No, the fact that you have some uncertainty about the test just means you should choose a larger confidence interval than if you knew the details of the test. It shouldn’t stop you from being able to produce a confidence interval.
I wasn’t trying to argue against you or anything; I just wanted to give you a tip.
I don’t have any issue with people arguing with me. My issue is more with people who assume that I’m ignorant of the subject I’m talking about. Not knowing about Sokal would be a case of ignorance. But that’s still not a major issue.
Sarcasm does not further a constructive debate. Also, I think your way of arguing is generally too nit-picky and uncharitable.
Tribalism is a huge failure condition. I don’t think it’s helpful to pretend that it isn’t. Practicing charity in the sense of assuming that the people with whom one argues are immune to effects like tribalism is not conducive to truth-finding.
You yourself wrote a post about identifying patterns of bad reasoning. You won’t get very far with that project if you discuss under social norms that forbid people from pointing out those patterns.
The irony of you criticising Freud for not making falsifiable predictions while being unwilling to make concrete numeric falsifiable predictions about the supposed irrationality of postmodernists is too central to ignore out of a desire for politeness.
Part of science is that you are not charitable about predictions, interpreting them as true regardless of what data you find.
That’s especially important when you say negative things about an outgroup that you don’t like. It’s a topic where you have to be extra careful to follow principles of proper reasoning.
This might seem nit-picky to you, but it’s very far from it. You don’t make a discourse more rational by analysing it in a detached way if you don’t actually apply your tools for bias detection.
The whole issue with the Sokal episode was that the journal’s editors were very charitable to Sokal and therefore published his paper.
I don’t even know how the test is constructed, so it would be downright silly of me to try to come up with predictions
No, the fact that you have some uncertainty about the test just means you should choose a larger confidence interval than if you knew the details of the test. It shouldn’t stop you from being able to produce a confidence interval.
The fact that you have some uncertainty about the test also has some implications about the distribution of possible results. If a group is 10% less rational than another and that 10% is due to a characteristic that makes those group members systematically worse than the comparison group, you can measure a lot of group members and confirm that you get measurements that average 10% less.
If a group is 20% less rational than another group but there’s a 50% chance the test detects the difference and a 50% chance it doesn’t, that can also be described as you expecting results showing the group is 10% less rational. But unlike in the first case, you can’t take a lot of measurements and get a result that averages out to 10% less. You’ll either get a lot of results that average 20% less or a lot of results that aren’t less at all, depending on whether the test detects or doesn’t detect it.
And in the second case, the answer to “can I use the test to make predictions” is “no”. If you’re uncertain about the test, you can’t use it to make predictions, because you will be predicting the average of many samples (in order to reduce variation), and if you are uncertain about the test, averaging many samples doesn’t reduce variation.
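The two scenarios above can be sketched with a toy simulation (the 10%/20% figures come from the thread; the noise level, group size, and the idea of scoring deficits in "points" are illustrative assumptions):

```python
import random
import statistics

random.seed(0)

N = 10_000  # number of group members measured per study

def scenario_a():
    """Systematic case: every member really scores 10 points lower.
    Averaging many members shrinks the noise, so the sample mean
    converges to the true deficit of 10."""
    return statistics.mean(random.gauss(10, 5) for _ in range(N))

def scenario_b():
    """Test-uncertainty case: the group is 20 points less rational, but
    there is a 50% chance the test is entirely blind to the difference.
    The coin is flipped ONCE per test, not once per member, so averaging
    many members converges to 20 or to 0 -- never to the 'expected' 10."""
    test_works = random.random() < 0.5
    deficit = 20 if test_works else 0
    return statistics.mean(random.gauss(deficit, 5) for _ in range(N))

print(scenario_a())                      # close to 10
print([scenario_b() for _ in range(5)])  # each result near 20 or near 0
```

The point of the sketch is that in scenario B the uncertainty is shared across all members of one study, so it does not average out within that study.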
but there’s a 50% chance the test detects the difference and a 50% chance it doesn’t
Rationality is not a binary variable, but continuous. It is NOT the case that the test has a chance of detecting something or nothing: the test will output a value on some scale. If the test is not powerful enough to detect the difference, that will show up as the difference being not statistically significant: the difference will be swamped by noise, not simply appear in full or vanish in full in any given instance.
You’ll either get a lot of results that average 20% less or a lot of results that aren’t less at all
Nope—that would only be true if rationality were a boolean variable. It is not.
That doesn’t follow. For instance, imagine that one group is irrational because their brains freeze up at any problem that contains the number 8, and some tests contain the number 8 and some don’t. They’ll fail the former tests, but be indistinguishable from the other group on the latter tests.
I can imagine a lot of things that have no relationship to reality.
In any case, you were talking about a test that has a 50% chance of detecting the difference, presumably returning either 0% or 20% but never 10%. Your example does not address this case—it’s about different tests producing different results.
You were responding to Stefan. As such, it doesn’t matter whether you can imagine a test that works that way; it matters whether his uncertainty over whether the test works includes the possibility of it working that way.
Your example does not address this case—it’s about different tests producing different results.
If you don’t actually know that they freeze up at the sight of the number 8, and you are 50% likely to produce a test that contains the number 8, then the test has a 50% chance of working, by your own reasoning—actually, it has a 0% or 100% chance of working, but since you are uncertain about whether it works, you can fold the uncertainty into your estimate of how good the test is and claim 50%.
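The "fold the uncertainty into the test" point can be sketched as follows (the freeze-on-8 mechanism, the noiseless scores of 100 and 80, and the group sizes are hypothetical illustrations from the thought experiment above, not real data):

```python
import random

random.seed(2)

def build_test():
    """Assemble a test at random; assume half of all randomly built
    tests happen to include an item containing the digit 8."""
    return random.random() < 0.5  # True = test contains an '8 item'

def run_study():
    """One whole study: build a test once, then score the group.
    Group A always averages 100; group B averages 80, but only on
    tests that trigger the freeze. Noise is omitted for clarity."""
    contains_8 = build_test()
    group_b_mean = 80 if contains_8 else 100
    return 100 - group_b_mean  # measured gap for this study

gaps = [run_study() for _ in range(1000)]

# Each individual study reports a gap of exactly 0 or exactly 20; the
# 'expected' gap of 10 never appears in any single study, even though
# 10 is the average over studies -- matching the 50%-chance framing.
print(sorted(set(gaps)))
```

Because the test is built once per study, the construction uncertainty behaves exactly like a 50% chance that the test works at all.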
Right, finding a single anecdote where members of a tribe that you don’t like failed is a rational way to assess the general rationality of the average member of that tribe.
Keep in mind the editors of Social Text did not believe Sokal’s article was actually sound philosophy. Not understanding it, they preferred to give it the benefit of the doubt. The only thing that Sokal was able to trick them into believing was that the article was intended to be sound philosophy.
Keep in mind the editors of Social Text did not believe Sokal’s article was actually sound philosophy. Not understanding it, they preferred to give it the benefit of the doubt.
That’s like excusing oneself from causing a car crash on the grounds of being drunk.
Keep in mind the editors of Social Text did not believe Sokal’s article was actually sound philosophy. Not understanding it, they preferred to give it the benefit of the doubt.
Sokal is a physicist, and a publication like this would have been a major embarrassment inside his field. So he had no choice but to disclose the hoax before anyone else (who maybe didn’t get the joke) commented on it.
In what way? Who was injured?
They are both pleading incompetence as an excuse for failure.
We only know that’s what they said afterwards.
By the same argument, we only know it was intended to be a hoax because Sokal said so afterward....