Huh, that isn’t at all what I would mean by “reductionist epistemology”. For me it would be something like “explain complicated phenomena in terms of simpler, law-based parts”. (E.g. explain motion of objects through billiard ball physics; explain characteristics and behavior of organisms through the theory of evolution, etc.) Looking at the simplest individual cases can be a recipe for success at reductionist epistemology, but as you point out, often it is not.
For your first example, it still seems like Bayesianism is better than anything else out there—the fact that we agree that there is something to the concept of “97% true” just means that there is still more to be done.
For your second example, I would say it’s a success of reductionist epistemology: the best answer I know of to it is Solomonoff induction, which posits the existence of hypotheses and then uses Bayesian updating. (Or perhaps you prefer logical induction, which involves a set of traders and finding prices that the traders cannot exploit.) There are plenty of reasons to be unsatisfied with Solomonoff induction, but I like it more than anything else out there, and it seems like a central example of reductionist epistemology.
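(For concreteness, here is a rough sketch of the standard Solomonoff setup, i.e. what “posits hypotheses and then uses Bayesian updating” cashes out to: hypotheses are programs $h$ for a universal Turing machine, given prior weight that shrinks with program length, and those weights are updated by ordinary conditioning on the data seen so far:

$$P(h) \propto 2^{-\ell(h)}, \qquad P(h \mid d) = \frac{P(d \mid h)\,P(h)}{\sum_{h'} P(d \mid h')\,P(h')},$$

where $\ell(h)$ is the length of program $h$, $d$ is the observation sequence so far, and for deterministic programs $P(d \mid h)$ is just 1 if $h$’s output begins with $d$ and 0 otherwise.)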
I agree that the third example is a reasonable attempt at doing reductionist epistemology, and I’d say it didn’t work out. I don’t think it’s quite as obvious ex ante as you seem to think that it was destined to fail. But mostly I just want to say that of course some attempts to do reductionist epistemology are going to be wrongheaded and fail; reductionist epistemology is much more the claim that whatever succeeds will look like “explaining complicated phenomena in terms of simpler, law-based parts”. (This is similar to how Science says much about how hypotheses can be falsified, but doesn’t say much about how to find the correct hypotheses in the first place.)
I also like what little of systems theory I’ve read, and it seems quite compatible with (my version of) reductionist epistemology. Systems theory talks about various “levels of organization”, where the concepts that make sense of each level are very different from each other, and high-level concepts are “built out of” lower-level concepts. I think systems theory is usually concerned with the case where the low-level concepts + laws are known but the high-level ones are not (e.g. chaos theory) or where both levels are somewhat known but it’s not clear how they relate (e.g. ecosystems, organizations), whereas reductionist epistemology is concerned with the case where we have a confusing set of observations, and says “let’s assume our current concepts are high-level concepts, and invent a set of low-level concepts + rules that explain the high-level concepts” (e.g. atoms invented to explain various chemical reactions, genes invented to explain the Mendelian pattern, Bayesianism invented to explain various aspects of “good reasoning”).
the fact that we agree that there is something to the concept of “97% true” just means that there is still more to be done
My point is, specifically, that being overly reductionist has made it harder for people to do that work, because they keep focusing on atomic propositions, about which claims like “97% true” are much less natural.
For your second example, I would say it’s a success of reductionist epistemology
In this case, Solomonoff induction is less reductionist than the alternative, because it postulates hypotheses over the whole world (aka things like laws of physics), rather than individual claims about it (like “these billiard balls will collide”).
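(One way to make that contrast concrete, using the standard setup above: the primitive objects are whole-world hypotheses $h$, and the probability of an individual claim $A$ like “these billiard balls will collide” is only obtained downstream, by marginalizing over the world-models:

$$P(A \mid d) = \sum_h P(A \mid h, d)\,P(h \mid d).$$

The individual propositions aren’t the basic units the theory reasons with; they’re derived from the hypotheses.)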
I don’t think it’s quite as obvious ex ante as you seem to think that it was destined to fail
Oh yeah, I don’t think it was obvious ex ante. But insofar as it seems like reductionism about epistemology fails more often than reductionism about other things, that seems useful to know.
reductionist epistemology is concerned with the case where we have a confusing set of observations, and says “let’s assume our current concepts are high-level concepts, and invent a set of low-level concepts + rules that explain the high-level concepts” (e.g. atoms invented to explain various chemical reactions, genes invented to explain the Mendelian pattern, Bayesianism invented to explain various aspects of “good reasoning”).
In hindsight I should have said “reductionism about epistemology”, since I’m only talking about applying reductionism to epistemology itself, not the epistemological strategy of applying reductionism to some other domain. I’ve changed the title to clarify, as well as talking about “some limitations” of it rather than being against the thing overall.
Ah, I’m much more on board with “reductionism about epistemology” having had limited success; that makes sense.
This is a classic problem. “Reductionism” means several different related things in philosophy.
The Oxford Companion to Philosophy suggests that reductionism is “one of the most used and abused terms in the philosophical lexicon” and suggests a three-part division:
Ontological reductionism: a belief that the whole of reality consists of a minimal number of parts.
Methodological reductionism: the scientific attempt to provide explanation in terms of ever smaller entities.
Theory reductionism: the suggestion that a newer theory does not replace or absorb an older one, but reduces it to more basic terms. Theory reduction itself is divisible into three parts: translation, derivation and explanation.” —WP