You’d need to spell out more precisely what he’s doing that you think deserves criticism.
Interestingly, I seem to have read quite a few of the “classics” that come up in that discussion on “what science does”: Polanyi’s Personal Knowledge, Feyerabend’s Against Method, Lakatos’s Proofs and Refutations, Kuhn’s Structure of Scientific Revolutions. Not Popper, however—I’ve read The Open Society but not his other works.
Given your stance on “explaining”, those strike me as good examples of the kind of thing you might want to have read, because that would leave you in a better position to criticize what you’re criticizing: less prone to misrepresenting it. (As for me, I’m now investing a lot of time and energy into this “Bayesian” stuff, which is definitely something of a counterpoint to my prior leanings.)
You’d need to spell out more precisely what [Gene Callahan]’s doing that you think deserves criticism.
Exactly what I referred to in the previous paragraph.
it’s [up to] those who are aware of the classics’ insights to understand and present them where applicable.
Callahan is, supposedly, aware of these classics’ insights. Did he present them where applicable? Show evidence he understands them? No. Every time he drops the name of a great author or a classic, he fails to put the argument in his own words, sketch it out, or show its applicability to the arguments under discussion.
For example, he drops the remark that “Polanyi showed that crystallography is an a priori science [in the sense that Austrian economics is]” as if it were conclusively settled. Then, when I explain why this can’t possibly be the case, Callahan is unable to provide any further elaboration of why that is (and I couldn’t find a reference to it anywhere).
The problem, I contend, is therefore on his end. To the extent that Callahan’s list of classics is relevant, and that he is a majestic bearer of this deep, hard-won knowledge, he is unable to actually show how the classics are relevant, and what amazing arguments are presented in them that obviate our discussion. The duty falls on him to make them relevant, not for everyone else to just go out and read everything he has, just because he thinks, in all his gullible wisdom, that it will totally convince us.
Note: I wasn’t alone in noticing Callahan’s refusal to engage. Another poster remarked:
Gene, The problems with appeals to authority are: 1) as you point out, not everyone may be familiar with the work of the authority, 2) the ‘authority’ may actually not be one (see Silas’ comments on crystallography), and 3) it’s a substitute for actually making an argument. It’s easy, and pointless, to simply say ‘other people have shown you’re wrong’. But if you present an argument then we can discuss its merits and flaws. …
See, that’s how discussion works. If you have a position, just explain it! Then we can talk about it.
With regard to the books you mention: from what little I have read of them, they aren’t impressive or promising. For example, Feyerabend seems to think he has some great insight that good scientific theories don’t have to incorporate the old theory, but rather normally make progress by ignoring the old. But he’s attacking a strawman: new theories aren’t expected to incorporate the old theory, just to be able to make the same predictions.
Also, people like to make a big deal about how clever Quine’s holism argument is, but if you’re at all familiar with Bayesianism, you roll your eyes at it. Yes, theories can’t be tested in isolation, but Bayesian inference can tell you which beliefs are most strongly weakened by which evidence, showing that you have a basis for saying which theory was, in effect, tested by the observations.
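To make that concrete, here is a toy Bayesian treatment of the Quine–Duhem situation (all priors and likelihoods invented purely for illustration): a failed prediction depends jointly on a theory T and an auxiliary assumption A, yet the update still says which belief the evidence mostly counts against.

```python
# Toy Quine-Duhem example: a prediction depends jointly on a theory T
# and an auxiliary assumption A, yet a Bayesian update still tells you
# which one a failed prediction counts against. All numbers invented.

# Prior: T is well supported; A (say, "the instrument is calibrated")
# is very well supported. Assume independence in the prior.
p_T, p_A = 0.80, 0.95

# Likelihood of the failed prediction E under each joint state:
# if both T and A hold, the failure is very surprising; otherwise not.
likelihood = {
    (True, True): 0.02,
    (True, False): 0.50,
    (False, True): 0.50,
    (False, False): 0.50,
}

# Joint prior over the four states.
prior = {
    (t, a): (p_T if t else 1 - p_T) * (p_A if a else 1 - p_A)
    for t in (True, False)
    for a in (True, False)
}

# Bayes: posterior proportional to prior times likelihood.
unnorm = {s: prior[s] * likelihood[s] for s in prior}
z = sum(unnorm.values())
posterior = {s: unnorm[s] / z for s in unnorm}

# Marginals: how much did each belief lose?
post_T = posterior[(True, True)] + posterior[(True, False)]
post_A = posterior[(True, True)] + posterior[(False, True)]
print(f"P(T): {p_T:.2f} -> {post_T:.2f}")  # falls from 0.80 to about 0.26
print(f"P(A): {p_A:.2f} -> {post_A:.2f}")  # falls only from 0.95 to about 0.82
```

Both beliefs take a hit, as Quine says they must, but the theory absorbs most of the blame: in this sense it, rather than the auxiliary assumption, is what the observation tested.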
Things like these make me skeptical of those who claim that these philosophers have something worthwhile to say to me about science. I would rather focus on reading the epistemology of those who are actually making real, unfakeable, un-groupthinkable progress, like Sebastian Thrun and Judea Pearl.
I think Lakatos’s Proofs and Refutations is a fun book, but the chief thing I learned from it is that mathematical proofs aren’t absolutely true, even when there is no error in reasoning. It’s about mathematics, not science. It’s also quite short, particularly if you skip the second, much more mathematically involved dialogue.
I learned the opposite: that mathematical proofs can be and should be absolutely true. When they fall short, it is a sign that some confusion still remains in the concepts.
I said mathematical proofs aren’t absolute because mathematical proofs and refutations are subject to philosophical, linguistic debate—argument about whether the proof fits the concept being played with, argument which can result in (for example) proof-constructed definitions. During this process, one might say that the original proof or refutation is correct, but no longer appropriate, or that the original proof is incorrect. Neither statement implies different behavior.
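The running example of Proofs and Refutations illustrates the pattern; a sketch, under the standard telling of the book:

```latex
% Euler's conjecture for polyhedra, the proof under debate in the book:
\[
  V - E + F = 2
\]
% It holds for the cube: $8 - 12 + 6 = 2$.
% But a "picture frame" (a block with a square hole bored through it) has
% $V = 16$, $E = 32$, $F = 16$, so $V - E + F = 0$.
% Is the proof wrong, or is the picture frame not a "polyhedron"?
% Monster-barring redefines "polyhedron" to exclude it; proof analysis
% instead amends the theorem (restricting it to simply connected
% polyhedra) -- a proof-generated concept in exactly this sense.
```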
For example, he drops the remark that “Polanyi showed that crystallography is an a priori science [in the sense that Austrian economics is]” as if it were conclusively settled.
You’re basically doing the same when you name-drop “a Bayesian revival in the sciences”. I’ve been here for months trying to figure out what the hell people mean by “Bayesian” and frankly feel little the wiser. It’s interesting to me, so I keep digging, but clearly explained? Give me a break. :)
I found Polanyi somewhat obscure (all that I could conclude from Personal Knowledge was that I was totally devoid of spiritual knowledge), so I won’t defend him. But one point that keeps coming up is that if you look closely, anything that people have so far come up with that purports to be a “methodological rule of science” can be falsified by looking at one scientist or another doing something that their peers are happy to call perfectly good science, yet violates one part or another of the supposed “methodology”.
As an example, being impartial certainly isn’t required to do good science; you can start out having a hunch and being damn sure your hunch is correct, and the energy to devise clever ways to turn your hunch into a workable theory lets you succeed where others don’t even acknowledge there is a problem to be solved. Semmelweis seems to be a good example of an opinionated scientist. Or maybe Seth Roberts.
You’re basically doing the same when you name-drop “a Bayesian revival in the sciences”.
That’s not remotely the same thing—I wasn’t bringing that up as some kind of substantiation for any argument, while Callahan was mentioning the thing about “a priori crystallography” (???) as an argument.
But one point that keeps coming up is that if you look closely, anything that people have so far come up with that purports to be a “methodological rule of science” can be falsified by looking at one scientist or another doing something that their peers are happy to call perfectly good science, yet violates one part or another of the supposed “methodology”.
So? I was arguing about what deserves to be called science, not what happens to be called science. And yes, people practice “ideal science” imperfectly, but that’s no evidence against the validity of the ideal, any more than it’s a criticism of circles that no one ever uses a perfect one. Furthermore, every time someone points to one of these counterexamples, it happens to be at best a strawman view. Like what you do here:
As an example, being impartial certainly isn’t required to do good science; you can start out having a hunch and being damn sure your hunch is correct, …
The claim isn’t that you have to be impartial, but that you must adhere to a method that will filter out your partiality. That is, there has to be something that can distinguish your method from groupthink, from decreeing something true merely because you have a gentleman’s agreement not to contradict it.
I learned the opposite: that mathematical proofs can be and should be absolutely true.
I see no contradiction between these interpretations. :P
If they’re never absolutely true (your interpretation), how can they ever be absolutely true (my interpretation)?
Semmelweis seems to be a good example of an opinionated scientist. Or maybe Seth Roberts.
What’s your take on string theorists? ;)