I’m not as interested in proving my point, as in figuring out why people resist it so strongly. It seems people are eager to disagree with me and reluctant to agree with me.
How did the post make you feel, and why?
It’s not just their feelings, it’s their priors.
I’ve found previously that many people here are extremely hostile to criticisms of the statistical methods of the medical establishment. It’s extremely odd at a site that puts Jaynes on a pedestal, as no one rants more loudly and makes the case clearer than Jaynes did, but there it is.
But consider if you’re not a statistician. You’re not into the foundations of statistical inference. You haven’t read Jaynes. Maybe you’ve had one semester of statistics in your life. When you’re taught hypothesis testing, you’re taught a method. That method is statistics. There’s no discussion about foundations. And you look at medical journals. This is how they do it. This is how science is done. And if you’re a rationalist, you’re on Team Science.
Elsewhere in the thread, there are links to a Gigerenzer paper showing how statistics students and their professors are making fundamental errors in their interpretations of the results of confidence interval testing. If stat professors can’t get it right, the number of people who have any notion that there is a possibility of an issue is vanishingly small. Higher here than usual, but still a minority.
Meanwhile, you’ve shown up and attacked Team Science in general in medicine. To add the cherry on top, you did it in the context of exactly the kind of issue that Team Science most rants about—some “anecdotal hysteria” where parents are going ballistic about some substance that is supposedly harming their precious little lumpkins. But everyone knows there is nothing wrong with food dyes. They’ve been around for decades. The authorities have tested them and declared them safe. There is no evidence for the claim, and it’s harmful to be drumming up these scares.
My PhD was in EE, doing statistical inference in a machine learning context. Rather unsophisticated stuff, but that’s something of the point. Science is not an integrated whole where the best thoughts, best practices, and best methods are used everywhere instantaneously. It takes decades, and sometimes wrong turns are taken. I’m talking to professional data analysts in big money companies, and they’ve never even heard of guys taken as canonical authorities in machine learning circles. I was reading Jaynes when his book was a series of postscript files floating around the web. The intro graduate classes I took from the stat department didn’t discuss any of these issues at all. I don’t think any class did. Anything I got in that regard I got from reading journal articles, academic mailing lists, and thinking about it myself. How many people have done that?
You’re saying that medical science as it is done sucks. You’re right, but what do you think the prior is on Team Science that Phil Goetz is right, and Team Science is wrong?
I’ve found previously that many people here are extremely hostile to criticisms of the statistical methods of the medical establishment. It’s extremely odd at a site that puts Jaynes on a pedestal, as no one rants more loudly and makes the case clearer than Jaynes did, but there it is.
(Eliezer does, anyway. I can’t say I see very many quotes or invocations from others.)
I am hostile to some criticisms, because in some cases when I see them being done online, it’s not in the spirit of ‘let us understand how these methods make this research fundamentally flawed, what this implies, and how much we can actually extract from this research’*, but in the spirit of ‘the earth is actually not spherical but an oblate spheroid thus you have been educated stupid and Time has Four Corners!’ Because the standard work has flaws, they feel free to jump to whatever random bullshit they like best. ‘Everything is true, nothing is forbidden.’
* eg. although extreme and much more work than I realistically expect anyone to do, I regard my dual n-back meta-analysis as a model of how to react to potentially valid criticisms. Instead of learning that passive control groups are a serious methodological issue which may be inflating the effect and then going around making that point at every discussion, and saying ‘if you really want to increase intelligence, you ought to try this new krill oil stuff!’, I compiled studies while noting whether they were active or passive, and eventually produced a meta-analytic regression confirming the prediction.
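(For concreteness, here is a minimal sketch of the moderator approach; the effect sizes, variances, and study split below are invented for illustration, not the actual DNB data. The idea is just to regress study effect sizes on an active/passive dummy, weighting each study by its inverse variance.)

```python
import numpy as np

# Hypothetical effect sizes from six made-up studies, with a dummy marking
# whether the control group was passive (1) or active (0).
g        = np.array([0.45, 0.50, 0.38, 0.12, 0.08, 0.15])
passive  = np.array([1.0,  1.0,  1.0,  0.0,  0.0,  0.0])
variance = np.array([0.04, 0.05, 0.03, 0.04, 0.05, 0.03])

# Fixed-effect meta-regression: weighted least squares of effect size on the
# moderator, weighting each study by its inverse variance.
X = np.column_stack([np.ones_like(g), passive])
W = np.diag(1.0 / variance)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ g)

print(f"effect with active controls: {beta[0]:.3f}")
print(f"extra effect from passive controls: {beta[1]:.3f}")
```

The dummy’s coefficient is the estimated inflation attributable to passive controls, which is the quantity the compiled studies let you test directly instead of just asserting the criticism.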
(And of course, sometimes I just think the criticism is being cargo-culted and doesn’t apply; an example on this very page is tenlier’s criticism of the lack of multiple testing correction, where he is parroting a ‘sophisticated’ criticism that does indeed undermine countless papers and lead to serious problems but the criticism simply isn’t right when he uses it, and when I explained in detail why he was misapplying it, he had the gall to sign off expressing disappointment in LW’s competency!)
Such criticism for most people seems to only enable confirmation bias or one-sided skepticism, along the lines of knowing about biases can hurt you.
A great example here is Seth Roberts. You can read his blog and see him posting a list of very acute questions for Posit Science on the actual validity of a study they ran on brain training, probing aspects like the endpoints, possibility of data mining, publication bias or selection effects and so on, and indeed, their response is pretty lame because it’s become increasingly clear that brain training does not cause far transfer and the methodological weaknesses are indeed driving some of the effects. And then you can flip the page and see him turn off his brain and repeat uncritically whatever random anecdote someone has emailed him that day, and make incredibly stupid claims like ‘evidence that butter is good for your brain comes from Indian religion’s veneration for ghee’ (and never mind that this is pulled out of countless random claims from all sorts of folk medicines; or that Ayurvedic medicine is completely groundless, random crap, and has problems with heavy metal poisoning!), and if you try sending him a null result on one of his pet effects, he won’t post it. It’s like there’s two different Roberts.
(Eliezer does, anyway. I can’t say I see very many quotes or invocations from others.)
Put Jaynes on a pedestal, you mean?
Hmmmn. I had that problem before with Korzybski. I saw The Map is Not the Territory, and assumed people were familiar with Korzybski and his book Science and Sanity. Turns out even Eliezer hadn’t read it, and he got his general semantics influence from secondary writers such as Hayakawa. I’ve only found one guy here with confirmed knowledge of a breadth of general semantics literature.
How many people do you think have read substantial portions of Jaynes book? Have you?
Yes. To put it one way, a site search for ‘Jaynes’ (which will hit other people sometimes discussed on LW, like the Bicameral Mind Jaynes) turns up 718 hits; in contrast, to name some other statisticians or like-minded folks - ‘Ioannidis’ (which is hard to spell) turns up 89 results, ‘Cochrane’ 57, ‘Fisher’ 127, ‘Cohen’ 193, ‘Gelman’ 128, ‘Shalizi’ 109… So apparently in the LW-pantheon-of-statisticians-off-the-top-of-my-head, Jaynes can barely muster a majority (718 vs 703). For someone on a pedestal, he just isn’t discussed much.
How many people do you think have read substantial portions of Jaynes book?
Most of those in the book reading clubs fail, he is rarely quoted or cited… <5%.
Have you?
I bought a copy (sitting on my table now, actually), read up to chapter 4, some sections of other chapters that were interesting, and concluded that a number of reviews were correct in claiming it was not the best introduction for a naive non-statistician.
So I’ve been working through other courses/papers/books and running experiments and doing analyses of my own to learn statistics. I do plan to go back to Jaynes, but only once I have some more learning under my belt—the Probabilistic Graphical Models Coursera is starting today, and I’m going to see if I can handle it, and after that I’m going to look through and pick one of Kruschke’s Doing Bayesian Data Analysis, Sivia’s Data Analysis: A Bayesian Tutorial, Bolstad’s Introduction to Bayesian Statistics, and Albert’s Bayesian Computation with R. But we’ll see how things actually go.
The problem is that it’s very hard to change your world view or even to coherently understand the worldview of someone else. Understanding that you might be wrong about things you take for granted is hard.
Among new atheists even the notion that the nature of truth is up for discussion is a very threatening question.
Even if they read Jaynes from cover to cover, they take the notion of truth they learned as children for granted and don’t think deeply about where Jaynes’s notion of truth differs from their own.
The discussion about Bayesianism with David Chapman illustrates how he and senior LW people didn’t even get clear about the points on which they disagree.
Among new atheists even the notion that the nature of truth is up for discussion is a very threatening question.
I don’t know if it’s threatening, and I doubt that it applies to Dennett, but the other guys can’t seem to even conceive of truth beyond correspondence.
But if it’s a matter of people being open to changing their world view, to even understanding that they have one, and other people have other world views, it’s Korzybski they need to read, not Jaynes.
The guy with the blog is Chapman?
I don’t see a discussion. I see a pretty good video, and blog comments that I don’t see any value at all in. I had characterized them more colorfully, but seeing that Chapman is on the list, I decided to remove the color.
I’m not trying to be rude here, but his comments are just very wrong about probability, and thereby entirely clueless about the people he is criticizing.
As an example
It’s all just arithmetic.
No! Probability as inference most decidedly is not “just arithmetic”. Math tells you nothing axiomatically about the world. All our various mathematics are conceptual structures that may or may not be useful in the world.
That’s where Jaynes, and I guess Cox before him, adds in the magic. Jaynes doesn’t proceed axiomatically. He starts with the problem of representing confidence in a computer, and proceeds to show how the solution to that problem entails certain mathematics. He doesn’t proceed by “proof by axiomatic definitions”; he shows that the conceptual structures work for the problem attacked.
Also, in Jaynes’s presentation of probability theory as an extension of logic, P(A|B) isn’t axiomatically defined as P(AB)/P(B); it is the mathematical value assigned to the plausibility of a proposition A given that proposition B is taken to be true. It’s not about counting, it’s about reasoning about the truth of propositions given our knowledge.
I guess if he’s failing utterly to understand what people are talking about, what they’re saying might look like ritual incantation to him. I’m sure it is for some people.
Is there some reason I should take David Chapman as particularly authoritative? Why do you find his disagreement with senior LW people of particular note?
Is there some reason I should take David Chapman as particularly authoritative? Why do you find his disagreement with senior LW people of particular note?
I think in total that exchange provides a foundation for clearing the question of what Bayesianism is. I do consider that an important question.
As far as authority goes, David Chapman did publish academic papers about artificial intelligence. He did develop solutions for previously unsolved AI problems. When he says that there’s no sign of Bayes’ theorem in the code he used to solve an AI problem, he just might be right.
Funny, I’ve been making that point for a while. I doubt that it applies to Dennett, but the other guys can’t seem to conceive of truth beyond correspondence.
Dennett is pretty interesting. Instead of asking what various people mean when they say “consciousness”, he just assumes he knows and declares it nonexistent. The idea that maybe he doesn’t understand what other people mean by the term doesn’t come up in his thought.
Dennett writes about how detailed visual hallucinations are impossible. I have had experiences where what I visually perceived didn’t change much whether or not I closed my eyes. It was after I spent 5 days in an artificial coma. I know two other people, whom I have met face to face, who have had similar experiences.
I also have access to various accounts of people hallucinating things in other contexts via hypnosis. My own ability to let myself go is unfortunately not good, so I still lack first-hand experience of some other hallucinations.
A week ago I spoke at our local LW meetup with someone who said that while “IQ” obviously exists, “free will” obviously doesn’t. At that point in time I didn’t know exactly how to resolve the issue, but it seems to me that those are both concepts that exist somehow on the same level. You won’t find any IQ atoms and you won’t find any free-will atoms, but they are still mental concepts that can be used to model things about the real world.
That’s a problem that arises from not having a well-defined idea of what it means for concepts to exist. In practice that leads to terms like depression being defined by committee and written down in the DSM-5, and to people simply assuming that depression exists without asking themselves in what way it exists. If people asked themselves in what way it exists, that might provide grounds for a new way to think about depression.
But if it’s a matter of people being open to changing their world view, to even understanding that they have one, and other people have other world views, it’s Korzybski they need to read, not Jaynes.
The problem with Korzybski is that he’s hard to read. Reading and understanding him is going to be hard work for most people who are not already exposed to that kind of thinking.
What might be more readable is Barry Smith’s paper “Against Fantology”. It’s only 20 pages.
the idea being that it would be possible to save the fantological doctrine by denying the existence of those entities which cause it problems. Many heirs of the fantological world view have in this way found it possible to avoid the problems raised for their doctrines by apparent examples of true predications in the category of substance by denying the existence of substances.
I think that’s what the New Atheists like Dennett do. They simply pretend that the things that don’t fit in their worldview don’t exist.
I think you’re being unfair to Dennett. He actually has availed himself of the findings of other fields, and has been at the consciousness shtick for decades. He may not agree, but it’s unlikely he is unaware.
And when did he say consciousness was nonexistent?
Cite? That seems a rather odd thing for him to say, and not particularly in his ideological interests.
Dennett writes about how detailed visual hallucinations are impossible.
Cite here? Again, except for supernatural bogeymen, my experience of him is that he recognizes that all sorts of mental events exists, but maybe not in the way that people suppose.
They simply pretend that the things that don’t fit in their worldview don’t exist.
Not accurate. If those things don’t fit in their world views, they don’t exist in them, so they’re not pretending.
On the general brouhaha with Chapman, I seem to have missed most of it. He did one post on Jaynes and A_p, which I read, as I’ve always been interested in that particular branch of Jaynes’ work. But the post made a fundamental mistake, IMO and in the opinion of others, and I think Chapman admitted as much before all of his exchanges were over. So even with Chapman running the scoreboard, he’s behind in points.
Well, for one thing, Chapman was (at least at one point) a genuine, credentialed AI researcher and a good fraction of content on Less Wrong seems to be a kind of armchair AI-research. That’s the outside view, anyway. The inside view (from my perspective) matches your evaluation: he seems just plain wrong.
I think a few people here are credentialed, or working on their credentials in machine learning.
But almost everything useful I learned, I learned by just reading the literature. There were three main guys I thought had good answers—David Wolpert, Jaynes, and Pearl. I think time has put its stamp of approval on my taste.
Reading more from Chapman, he seems fairly reasonable as far as AI goes, but he’s got a few ideological axes to grind against some straw men.
On his criticisms of LW and Bayesianism, is there anyone here who doesn’t realize you need algorithms and representations beyond Bayes Rule? I think not too long ago we had a similar straw man massacre where everyone said “yeah, we have algorithms that do information processing other than Bayes rule—duh”.
And he really should have stuck it out longer in AI, as Hinton has gone a long way to solving the problem Chapman thought was insurmountable—getting proper representation of the space to analyze from the data without human spoon feeding. You need a hidden variable model of the observable data, and should be able to get it from prediction of subsets of the observables using the other observables. That much was obvious, it just took Hinton to find a good way to do it. Others are coming up with generalized learning modules and mapping them to brain constructs. There was never any need to despair of progress.
but in the spirit of ‘the earth is actually not spherical but an oblate spheroid thus you have been educated stupid and Time has Four Corners!’ Because the standard work has flaws, they feel free to jump to whatever random bullshit they like best.
But you don’t have a complete fossil record, therefore Creationism!
Obviously that’s a problem. This somewhat confirms my comment to Phil, that linking the statistical issue to food dyes made reception of his claims harder as it better fit your pattern than a general statistical argument.
But from the numbers he reported, the basic eyeball test of the data leaves me thinking that food dyes may have an effect. Certainly if you take the data alone without priors, I’d conclude that more likely than not, food dyes have an effect. That’s how I would interpret the 84% significance threshold—probably there is a difference. Do you agree?
Unfortunately, I don’t have JAMA access to the paper to really look at the data, so I’m going by the 84% significance threshold.
I made up the 84% threshold in my example, to show what can happen in the worst case. In this study, what they found was that food dye decreased hyperactivity, but not enough to pass the threshold. (I don’t know what the threshold was or what confidence level it was set for; they didn’t say in the tables. I assume 95%.)
If they had passed the threshold, they would have concluded that food dye affects behavior, but would probably not have published because it would be an embarrassing outcome that both camps would attack.
Yes, I’m making a general argument about that mistaken conclusion. The F-test is especially tricky, because you know you’re going to find some difference between the groups. What difference D would you expect to find if there is in fact no effect? That’s a really hard question, and the F-test dodges it by using the arbitrary but standard 95% confidence level to pick a higher threshold, F. Results between D and F would still support the hypothesis that there is an effect, while results below D would be evidence against that hypothesis. Not knowing what D is, we can’t say whether failure of an F-test is evidence for or against the hypothesis.
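To make D and F concrete, here is a toy simulation of a two-group experiment; the group size, effect size, and observed F below are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sims = 21, 200_000   # 21 subjects per group, 200k simulated experiments

def f_stats(effect):
    a = rng.normal(0.0,    1.0, (sims, n))            # control group
    b = rng.normal(effect, 1.0, (sims, n))            # treatment group
    sp2 = (a.var(1, ddof=1) + b.var(1, ddof=1)) / 2   # pooled variance
    t = (b.mean(1) - a.mean(1)) / np.sqrt(2.0 * sp2 / n)
    return t ** 2   # with two groups, the one-way F is the squared t

f_null = f_stats(0.0)   # no true effect
f_alt = f_stats(0.5)    # a modest true effect of half a standard deviation

D = f_null.mean()               # expected F under "no effect": about 1, not 0
F = np.quantile(f_null, 0.95)   # the conventional 95% threshold

# An observed F between D and F fails the test, yet is more likely under the
# modest-effect model than under the null (density ratio estimated by counting
# simulations in a window around the observed value):
f_obs, w = 2.5, 0.25
lr = (np.abs(f_alt - f_obs) < w).mean() / (np.abs(f_null - f_obs) < w).mean()
print(f"D = {D:.2f}, F = {F:.2f}, likelihood ratio at F = 2.5: {lr:.2f}")
```

With these made-up numbers, D comes out near 1 and the threshold F near 4, and a “non-significant” observed F of 2.5 is still more likely under the modeled effect than under the null.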
I’d add to the basic statistical problem the vast overgeneralization and bad decision theory.
You hit on one part of that, the generalization to the entire population.
People are different.
But even if they’re the same, U-shaped response curves make it unlikely to find a signal: you have to have the goldilocks amount to show an improvement. People also vary over time, going in and out of the goldilocks range. So when you add something, you’ll be pushing some people into the goldilocks range and some people out.
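A toy simulation makes the point; the quadratic “goldilocks” curve, the spread of baselines, and the dose below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented inverted-U response: benefit peaks at a level of 1.0
# and falls off on either side.
def benefit(level):
    return 1.0 - (level - 1.0) ** 2

# People start at different baseline levels of the substance...
baseline = rng.uniform(0.0, 1.4, 10_000)

# ...and a fixed dose shifts everyone up by the same amount, pushing some
# into the goldilocks range and some out of it.
dose = 0.6
change = benefit(baseline + dose) - benefit(baseline)

print(f"mean change: {change.mean():+.3f}")
print(f"helped: {(change > 0).mean():.0%}, harmed: {(change < 0).mean():.0%}")
```

The average treatment effect washes out to roughly zero, so a group-mean test finds nothing, even though the dose substantially changed most individuals’ outcomes in one direction or the other.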
It also comes from multiple paths to the same disease. A disease is a set of observable symptoms, not the varying particular causes of the same symptoms. Of course it’s hard to find the signal in a batch of people clustered into a dozen different underlying causes for the same symptoms.
But the bad decision theory is the worst part, IMO. If you have a chronic problem, a 5% chance of a cure from a low-risk, low-cost intervention is great. But getting a 5% signal out of black-box testing regimes biased against false positives is extremely unlikely, and the bias against interventions that “don’t work” keeps many doctors from trying perfectly safe treatments that have a reasonable chance of working.
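Putting made-up numbers on that decision shows how lopsided it is:

```python
# All numbers invented for illustration: a cheap, safe intervention with only
# a 5% chance of curing a chronic problem still has strongly positive
# expected value for the patient.
p_cure     = 0.05
cure_value = 10_000   # subjective value of resolving the chronic problem
trial_cost = 50       # cost of trying the intervention
harm_risk  = 0.001    # small chance of a minor side effect
harm_cost  = 500

ev = p_cure * cure_value - trial_cost - harm_risk * harm_cost
print(f"expected value of trying it: {ev:+.1f}")   # strongly positive
```

The same intervention would fail any “proven cure” standard, yet trying it dominates not trying it by a wide margin under these assumptions.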
The whole outlook is bad. It shouldn’t be “find me a proven cure that works for everyone”. It should be “find me interventions to control the system in a known way.” Get me knobs to turn, and let’s see if any of the knobs work for you.
Certainly if you take the data alone without priors, I’d conclude that more likely than not, food dyes have an effect. That’s how I would interpret the 84% significance threshold—probably there is a difference. Do you agree?
I haven’t looked but I suspect I would not agree and that you may be making the classic significance misinterpretation.
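To spell the misinterpretation out: a p-value near 0.16 is not an 84% probability that the effect is real. A toy simulation, with every parameter invented (a 50% prior on a real effect, an effect size of 0.3 SD, n = 20 per experiment), shows the gap:

```python
import numpy as np

rng = np.random.default_rng(2)

# Half of the simulated experiments test a real (small) effect, half test nothing.
sims, n, d = 400_000, 20, 0.3
real = rng.random(sims) < 0.5
x = rng.normal(np.where(real, d, 0.0)[:, None], 1.0, (sims, n))
t = x.mean(1) / (x.std(1, ddof=1) / np.sqrt(n))

# Among experiments whose t statistic landed near the "84% significance" level
# (|t| around 1.46 for n = 20), how many were actually testing a real effect?
near = np.abs(np.abs(t) - 1.46) < 0.05
print(f"P(real effect | result at ~84% level) = {real[near].mean():.2f}")
```

Under these assumptions a result at the ~84% level does raise the probability of a real effect above the 50% prior, so it is evidence for an effect, but it leaves the posterior well short of 84%.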
Is there some reason I should take David Chapman as particularly authoritative? Why do you find his disagreement with senior LW people of particular note?
Because senior LW people spent effort in replying to him. The post led to LW posts such as “What Bayesianism Taught Me”. Scott Alexander wrote in response: “On First Looking into Chapman’s Pop-Bayesianism”. Kaj Sotala had a lively exchange in the comments of that article.
I think that, in total, that exchange provides a foundation for clearing up the question of what Bayesianism is. I do consider that an important question.
As far as authority goes, David Chapman did publish academic papers about artificial intelligence. He did develop solutions for previously unsolved AI problems. When he says that there’s no sign of Bayes’ rule in the code he used to solve an AI problem, he just might be right.
Dennett is pretty interesting. Instead of asking what various people mean when they say “consciousness”, he just assumes he knows and declares it nonexistent. The idea that maybe he doesn’t understand what other people mean by the term doesn’t come up in his thought.
Dennett writes about how detailed visual hallucinations are impossible. I have had experiences where what I visually perceived didn’t change much whether or not I closed my eyes. It was after I spent 5 days in an artificial coma. I know two additional people, whom I meet face to face, who have had similar experiences.
I also have access to various accounts of people hallucinating things in other contexts via hypnosis. My own ability to let myself go is unfortunately not good, so I still lack first-hand accounts of some other hallucinations.
A week ago I spoke at our local LW meetup with someone who said that while “IQ” obviously exists, “free will” obviously doesn’t. At that point in time I didn’t know exactly how to resolve the issue, but it seems to me that those are both concepts that exist somehow on the same level. You won’t find any IQ atoms and you won’t find any free-will atoms, but they are still mental concepts that can be used to model things about the real world.
That’s a problem that arises from not having a well-defined idea of what it means for concepts to exist. In practice that leads to terms like depression getting defined by committee and written down in the DSM-V, and people simply assuming that depression exists without asking themselves in what way it exists. If people asked themselves in what way it exists, that might provide ground for a new way to think about depression.
The problem with Korzybski is that he’s hard to read. Reading and understanding him is going to be hard work for most people who are not exposed to that kind of thinking.
What might be more readable is Barry Smith’s paper “Against Fantology”. It’s only 20 pages.
I think that’s what the New Atheists like Dennett do. They simply pretend that the things that don’t fit in their worldview don’t exist.
I think you’re being unfair to Dennett. He actually has availed himself of the findings of other fields, and has been at the consciousness shtick for decades. He may not agree, but it’s unlikely he is unaware.
And when did he say consciousness was nonexistent?
Cite? That seems a rather odd thing for him to say, and not particularly in his ideological interests.
Cite here? Again, except for supernatural bogeymen, my experience of him is that he recognizes that all sorts of mental events exists, but maybe not in the way that people suppose.
Not accurate. If those things don’t fit in their world views, they don’t exist in them, so they’re not pretending.
On the general brouhaha with Chapman, I seem to have missed most of that. He did one post on Jaynes and A_p, which I read, as I’ve always been interested in that particular branch of Jaynes’ work. But the post made a fundamental mistake, in my opinion and the opinion of others, and I think Chapman admitted as much before all of his exchanges were over. So even with Chapman running the scoreboard, he’s behind on points.
Well, for one thing, Chapman was (at least at one point) a genuine, credentialed AI researcher and a good fraction of content on Less Wrong seems to be a kind of armchair AI-research. That’s the outside view, anyway. The inside view (from my perspective) matches your evaluation: he seems just plain wrong.
I think a few people here are credentialed, or working on their credentials in machine learning.
But almost everything useful I learned, I learned by just reading the literature. There were three main guys I thought had good answers: David Wolpert, Jaynes, and Pearl. I think time has put its stamp of approval on my taste.
Reading more from Chapman, he seems fairly reasonable as far as AI goes, but he’s got a few ideological axes to grind against some straw men.
On his criticisms of LW and Bayesianism, is there anyone here who doesn’t realize you need algorithms and representations beyond Bayes Rule? I think not too long ago we had a similar straw man massacre where everyone said “yeah, we have algorithms that do information processing other than Bayes rule—duh”.
And he really should have stuck it out longer in AI, as Hinton has gone a long way to solving the problem Chapman thought was insurmountable—getting proper representation of the space to analyze from the data without human spoon feeding. You need a hidden variable model of the observable data, and should be able to get it from prediction of subsets of the observables using the other observables. That much was obvious, it just took Hinton to find a good way to do it. Others are coming up with generalized learning modules and mapping them to brain constructs. There was never any need to despair of progress.
But you don’t have a complete fossil record, therefore Creationism!
Obviously that’s a problem. This somewhat confirms my comment to Phil, that linking the statistical issue to food dyes made reception of his claims harder as it better fit your pattern than a general statistical argument.
But from the numbers he reported, the basic eyeball test of the data leaves me thinking that food dyes may have an effect. Certainly if you take the data alone without priors, I’d conclude that, more likely than not, food dyes have an effect. That’s how I would interpret the 84% significance threshold: probably there is a difference. Do you agree?
Unfortunately, I don’t have JAMA access to the paper to really look at the data, so I’m going by the 84% significance threshold.
I made up the 84% threshold in my example, to show what can happen in the worst case. In this study, what they found was that food dye decreased hyperactivity, but not enough to pass the threshold. (I don’t know what the threshold was or what confidence level it was set for; they didn’t say in the tables. I assume 95%.)
If they had passed the threshold, they would have concluded that food dye affects behavior, but would probably not have published because it would be an embarrassing outcome that both camps would attack.
To be clear, then, you’re not claiming that any evidence in the paper amounts to any kind of good evidence that an effect exists?
You’re making a general argument about the mistaken conclusion of jumping from “failure to reject the null” to a denial that any effect exists.
Yes, I’m making a general argument about that mistaken conclusion. The F-test is especially tricky, because you know you’re going to find some difference between the groups. What difference D would you expect to find if there is in fact no effect? That’s a really hard question, and the F-test dodges it by using the arbitrary but standard 95% confidence level to pick a higher threshold, F. Results between D and F would still support the hypothesis that there is an effect, while results below D would be evidence against that hypothesis. Not knowing what D is, we can’t say whether failure of an F-test is evidence for or against the hypothesis.
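You can see this with a quick Monte Carlo sketch. None of the numbers come from the paper; the group sizes, the half-standard-deviation effect, and the “non-significant” F window of 2 to 3 are all made up for illustration. The idea: estimate how often an F value lands in that window under the null versus under a modest real effect. If the ratio is above 1, a result that fails the significance test nonetheless favors the effect hypothesis.

```python
import random
import statistics

def f_stat(g1, g2):
    # one-way ANOVA F for two groups (equals the squared two-sample t statistic)
    n1, n2 = len(g1), len(g2)
    m1, m2 = statistics.fmean(g1), statistics.fmean(g2)
    grand = statistics.fmean(g1 + g2)
    ss_between = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2
    ss_within = sum((x - m1) ** 2 for x in g1) + sum((x - m2) ** 2 for x in g2)
    ms_between = ss_between / 1            # df between = k - 1 = 1
    ms_within = ss_within / (n1 + n2 - 2)  # df within = n1 + n2 - 2
    return ms_between / ms_within

def simulate(effect, trials=20000, n=20):
    # draw F statistics for two groups of n, with the given true mean difference
    rng = random.Random(0)
    fs = []
    for _ in range(trials):
        g1 = [rng.gauss(0.0, 1.0) for _ in range(n)]
        g2 = [rng.gauss(effect, 1.0) for _ in range(n)]
        fs.append(f_stat(g1, g2))
    return fs

null_fs = simulate(0.0)   # no effect
alt_fs = simulate(0.5)    # hypothetical half-SD effect

def frac_in(fs, lo, hi):
    return sum(lo <= f < hi for f in fs) / len(fs)

# F between 2 and 3 is below the 95% critical value (about 4.1 for df 1, 38),
# so the test "fails" -- yet such values are more probable when an effect exists
p_null = frac_in(null_fs, 2.0, 3.0)
p_alt = frac_in(alt_fs, 2.0, 3.0)
print(p_alt / p_null)  # likelihood ratio > 1: the "non-significant" F favors an effect
```

So under these invented conditions, F values well below the significance threshold are evidence for the effect, not against it, which is exactly why “failed to reject the null” cannot be read as “no effect”.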
I’d add to the basic statistical problem the vast overgeneralization and bad decision theory.
You hit on one part of that, the generalization to the entire population.
People are different.
But even if they’re the same, U-shaped response curves make it unlikely to find a signal: you have to have the goldilocks amount to show an improvement. People vary over time, going in and out of the goldilocks range. So when you add something, you’ll be pushing some people into the goldilocks range, and some people out.
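A toy simulation of the goldilocks problem, with an invented inverted-U response curve and an invented spread of baseline levels: give everyone the same added dose, and some people get pushed into the sweet spot while others get pushed out, so the group average nearly washes out even though many individuals genuinely improve.

```python
import random

def response(level):
    # hypothetical inverted-U ("goldilocks") curve: best outcome near level = 1.0
    return -(level - 1.0) ** 2

rng = random.Random(1)
# people sit at different (and, over time, varying) baseline levels
baselines = [rng.uniform(0.0, 2.0) for _ in range(10000)]

dose = 0.5  # the same intervention given to everyone
effects = [response(b + dose) - response(b) for b in baselines]

mean_effect = sum(effects) / len(effects)
helped = sum(e > 0 for e in effects) / len(effects)
print(mean_effect, helped)
```

With these made-up numbers, over a third of the population improves, yet the population mean effect comes out slightly negative: a trial averaging over everyone sees essentially no signal, and certainly not the improvement that a large subgroup actually experienced.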
It also comes from multiple paths to the same disease. A disease is a set of observable symptoms, not the varying particular causes of the same symptoms. Of course it’s hard to find the signal in a batch of people clustered into a dozen different underlying causes for the same symptoms.
But the bad decision theory is the worst part, IMO. If you have a chronic problem, a 5% chance of a cure from a low-risk, low-cost intervention is great. But getting a 5% signal out of black-box testing regimes biased against false positives is extremely unlikely, and the bias against interventions that “don’t work” keeps many doctors from trying perfectly safe treatments that have a reasonable chance of working.
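A back-of-the-envelope version of that decision, with all numbers invented: a chronic problem, a safe intervention with a 5% chance of working, a large payoff if it works, and a small cost to try.

```python
# all numbers hypothetical: chance of a cure, value of a cure, cost of trying
p_cure = 0.05
benefit = 100.0   # value of resolving a chronic problem
cost = 1.0        # low-risk, low-cost intervention

expected_value = p_cure * benefit - cost
print(expected_value)  # 4.0: positive, so the knob is worth trying
```

The expected value is positive by a wide margin, yet a testing regime tuned to avoid false positives at the 95% level will routinely label exactly this kind of knob as something that “doesn’t work”.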
The whole outlook is bad. It shouldn’t be “find me a proven cure that works for everyone”. It should be “find me interventions to control the system in a known way.” Get me knobs to turn, and let’s see if any of the knobs work for you.
I believe Knight posted links to fulltext at http://lesswrong.com/lw/h56/the_universal_medical_journal_article_error/8pne
I haven’t looked but I suspect I would not agree and that you may be making the classic significance misinterpretation.