I don’t understand Popper’s work beyond the Wikipedia summary of critical rationalism
FYI that won’t work. Wikipedia doesn’t understand Popper. Secondary sources promoting myths, as Jaynes did, are common. A pretty good overview is the Popper book by Bryan Magee (only about 100 pages).
without value
I posted criticisms of Jaynes’ arguments (or more accurately, his assumptions). I posted an argument about support. Why don’t you answer it?
You just have to get over the arbitrariness.
You are basically admitting that your epistemology is wrong. Given that Popper has an epistemology which does not have this feature, and the rejections of him by Bayesians are unscholarly mistakes, you should be interested in it!
Of course if I wrote up his whole epistemology and posted it here for you that would be nice. But that would take a long time, and it would repeat content from his books.
If you want somewhere to start online, you could read
That is not primarily what we want. And what you’re doing here is conflating Bayes’ theorem (which is about probability, and which is a matter of logic, and which is correct) with Bayesian epistemology (the application of Bayes’ theorem to epistemological problems, rather than to the math behind betting).
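The theorem itself, as distinct from the epistemology built on it, is just arithmetic over conditional probabilities. A minimal sketch, with numbers invented purely for illustration:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
# All numbers below are illustrative placeholders, not claims from the discussion.
p_h = 0.01             # prior probability of hypothesis H
p_e_given_h = 0.9      # likelihood of evidence E if H is true
p_e_given_not_h = 0.1  # likelihood of E if H is false

# Total probability of the evidence (law of total probability).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior probability of H after observing E.
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 4))  # → 0.0833
```

Note that nothing in this calculation tells you where `p_h` comes from; that is exactly the prior-choice question at issue in the rest of the thread.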
To suggest that something else is an improved method of doing science is nothing more than to suggest that it is a more feasible approximation to Bayesianism. These things are mathematical facts,
Are you open to the possibility that the general outline of your approach is itself mistaken, and that the theorems you have proven within your framework of assumptions are therefore not all true? Or:
It seems like the only possible room for debate is the choice of prior.
Are you so sure of yourself—that you are right about many things—that you will dismiss all rival ideas without even having to know what they say? Even when they offer things your approach doesn’t have, such as not having arbitrary foundations.
What you’re doing is accepting ideas which have been popular since Aristotle. When you think no other ways are possible, that’s bias talking. Your ideas have become common sense (not the Bayes part, but the philosophical approach to epistemology you are taking which comes before you use Bayes’s theorem at all).
Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper’s epistemology? Someone should have done it, right? And if no one ever has, then you should be interested in investigating, right? And also interested in investigating what is wrong with your movement that it never addressed rival ideas in scholarly debate. (I have looked for such a criticism. Never managed to find one.)
Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper’s epistemology? Someone should have done it, right?
Most things in the space of possible documents can’t be refuted, because they don’t correspond to anything refutable. They are simply confused, and irredeemably.
In the case of epistemology, virtually everything that has ever been said falls into this category. I am glad that I don’t have to spend time thinking about it, because it is solved. I would not generally criticize a rival’s ideas, because I no longer care. The problem is solved, and I can go work on things that still matter.
Are you so sure of yourself—that you are right about many things—that you will dismiss all rival ideas without even having to know what they say?
Once I know the definitive answer to a question, I will dismiss all other answers (rather than trying to poke holes in them). The only sort of argument which warrants response is an objection to my current definitive answer. So ignorance of Popper is essentially irrelevant (and I suspect I couldn’t object to anything in his philosophy, because it has essentially no content concrete enough to be defeated by mere reasoning).
The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted—whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested, except that I am fairly confident the problem has no solution for what are essentially obvious reasons. Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.
Really this entire discussion comes down to what we want out of epistemology.
That [guiding betting] is not primarily what we want.
What do you want? I don’t understand at all. Whatever you specify, I would be shocked if critical rationality provided it. Here is what I want, and maybe you will agree:
I want to decide between action A and action B. To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world. In particular, by choosing B instead of A, I am making a bet about the consequences of A and B. I would like to make such bets in the best possible way.
Lo! This is precisely what Bayesianism allows me to do. Why is there more to say?
You can object that it involves knowing a prior. But from the problem statement it is obvious (as a mathematical fact) that there is a universe in which each possible prior is the best one. Is there a strategy that does better than Bayesianism with a reasonable prior in all possible universes? Maybe, but Popper’s ideas aren’t nearly precise enough to answer the question (by which I mean, not even at the point where this question, to me clearly the most important one, is meaningful). Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore “avoids” this difficulty?
If I have to bet, or make a decision that affects people’s lives which amounts to a bet, I am going to use Bayesianism, or a computational heuristic which I justify by Bayesianism. Doing something else seems irresponsible.
Most things in the space of possible documents can’t be refuted, because they don’t correspond to anything refutable. They are simply confused, and irredeemably.
You don’t think confused things can be criticized? You can, for example, point out ambiguous passages. That would be a criticism. If they have no clarification to offer, then it would be (tentatively and fallibly) decisive (pending some reason to reconsider).
But you haven’t provided any argument that Popper in particular was confused, irrefutable, or whatever. I don’t know about you, but as someone who wants to improve my epistemological knowledge I think it’s important to consider all the major ideas in the field at the very least enough to know one good criticism of each.
Refusing to address criticism because you think you already have the solution is very closed-minded, is it not? You think you’re done with thinking, you have the final truth, and that’s that?
The only sort of argument which warrants response is an objection to my current definitive answer.
Popper published several of those. Where’s the response from Bayesians?
One thing to note is it’s hard to understand his objections without understanding his philosophy a bit more broadly (or you will misread stuff, not knowing the broader context of what he is trying to say, what assumptions he does not share with you, etc...)
The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted—whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested
Popper solved that problem.
I am fairly confident the problem has no solution for what are essentially obvious reasons
The standard reasons seem obvious because of your cultural bias. Since Aristotle some philosophical assumptions have been taken for granted by almost everyone. Now most people regard them as obvious. Given those assumptions, I agree that your conclusion follows (no way to avoid arbitrariness). The assumptions are called “justificationism” by Popperians, and are criticized in detail. I think you ought to be interested in this.
One criticism of justificationism is that it causes the regress/arbitrariness/foundations problem. The problem doesn’t exist automatically but is being created by your own assumptions.
Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.
What are you talking about? You haven’t read his books and claim he didn’t give enough detail? He was something of a workaholic who didn’t watch TV, didn’t have a big social life, and worked and wrote all the time.
What do you want?
To create knowledge, including explanatory and non-instrumentalist knowledge. You come off like a borderline positivist to me, who has trouble with the notion that non-empirical stuff is even meaningful. (No offense intended, and I’m not assuming you actually are a positivist, but I’m not really seeing much difference yet.)
To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world.
To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don’t think Bayesianism addresses this well.
Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore “avoids” this difficulty?
To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don’t think Bayesianism addresses this well.
Given well-defined contexts and meanings for good and bad, I don’t see why Bayesianism could not be effectively applied to moral problems.
You can’t create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can’t evaluate).
You’ve repeatedly claimed that the Popperian approach can somehow address moral issues. Despite requests, you’ve shown no details of that claim other than to say that you do the same thing you would otherwise do, but with moral claims. So let’s work through a specific moral issue. Can you take an example of a real moral issue that has been controversial historically (like, say, slavery or free speech) and show how the Popperian would approach it? A concrete worked-out example would be very helpful.
And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.
And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.
Consider that you are replying to a statement I just said that all you’ve done is say that it would use the same methodologies. Given that, does this reply seem sufficient? Do I need to repeat my request for a worked example (which is not included in your link)?
Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don’t.
First of all, you shouldn’t lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Bayes’ theorem is an abstraction. If you don’t have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn’t use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction.
This doesn’t mean that Bayes’ theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
You can’t create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can’t evaluate).
These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.
First of all, you shouldn’t lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Sorry. I have no idea who is who. Don’t mind me.
This doesn’t mean that Bayes’ theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
The Popperian method is universal.
if Bayesianism is in some sense Turing complete then it can be used to do all of this
Well, umm, yes, but that’s no help. My iMac is definitely Turing complete. It could run an AI. It could do whatever. But we don’t know how to make it do that stuff. Epistemology should help us.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods.
No problem, I’m just pointing out that there are other perspectives out here.
The Popperian method is universal.
Sure, in the sense it is Turing complete; but that doesn’t make it the most efficient approach for all cases. For example I’m not going to use it to decide the answer to the statement “2 + 3”, it is much more efficient for me to use the arithmetic abstraction.
But we don’t know how to make it do that stuff. Epistemology should help us.
Agreed, it is one of the reasons that I am actively working on epistemology.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods.
Example or details?
The naive Bayes classifier can be an effective way to classify discrete input into independent classes. Certainly for some cases it could be used to classify something as “good” or “bad” based on example input.
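As a concrete sketch of that claim, here is a from-scratch naive Bayes classifier over bag-of-words features, labeling toy phrases “good” or “bad”. The training examples are invented for illustration and carry no moral authority; the point is only to show the mechanics.

```python
from collections import Counter

# Invented, illustrative training data: (text, class) pairs.
train = [
    ("helping the injured stranger", "good"),
    ("sharing food with the hungry", "good"),
    ("stealing from the poor", "bad"),
    ("lying to cheat a friend", "bad"),
]

# Count word occurrences per class, and class frequencies.
word_counts = {"good": Counter(), "bad": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Return the class maximizing P(class) * prod P(word | class),
    with add-one (Laplace) smoothing for unseen words."""
    best_label, best_score = None, 0.0
    for label in class_counts:
        score = class_counts[label] / sum(class_counts.values())
        total = sum(word_counts[label].values())
        for w in text.split():
            score *= (word_counts[label][w] + 1) / (total + len(vocab))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Every word here appears only in the "bad" training examples.
print(classify("stealing from a friend"))  # → "bad"
```

Of course this only relabels examples according to the regularities in its training set; it does not address where the “good”/“bad” labels come from in the first place, which is the objection raised below.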
Bayesian networks can capture the meaning within interdependent sets. For example the meaning of words forms a complex network; if the meaning of a single word shifts it will probably result in changes to the meanings of related words; and in a similar way ideas on morality form connected interdependent structures.
Within a culture a particular moral position may be dependent on other moral positions, or even other aspects of the culture. For example a combination of religious beliefs and inheritance traditions might result in a belief that a husband is justified in killing an unfaithful wife. A Bayesian network trained on information across cultures might be able to identify these kinds of relationships. With this you could start to answer questions like “Why is X moral in the UK but not in Saudi Arabia?”
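A toy sketch of the kind of dependency structure described above, as a tiny hand-specified Bayesian network with inference by enumeration. The network structure and every probability are invented placeholders, not real cross-cultural survey data; a trained network would estimate these tables from data instead.

```python
# Three binary variables: R = religious belief is strict,
# T = inheritance tradition present, M = moral position accepted.
# Structure: R -> T, and (R, T) -> M. All numbers are made up.
p_religion = {True: 0.3, False: 0.7}
p_tradition = {True: {True: 0.8, False: 0.2},
               False: {True: 0.3, False: 0.7}}
p_position = {(True, True): 0.6, (True, False): 0.2,
              (False, True): 0.1, (False, False): 0.01}

def joint(r, t, m):
    """Joint probability P(R=r, T=t, M=m) via the chain rule."""
    p = p_religion[r] * p_tradition[r][t]
    p_m = p_position[(r, t)]
    return p * (p_m if m else 1 - p_m)

def p_position_given_religion(r):
    """P(M = accepted | R = r), computed by enumerating T."""
    num = sum(joint(r, t, True) for t in (True, False))
    den = sum(joint(r, t, m) for t in (True, False) for m in (True, False))
    return num / den

# Conditioning on the cultural variable shifts the inferred moral position.
print(round(p_position_given_religion(True), 4),
      round(p_position_given_religion(False), 4))
```

With realistic data this is the sort of query that could address “why is X accepted in one culture but not another” in the statistical sense meant here.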
No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to—which I think is all of them, but that doesn’t matter to universality).
Not in the sense that it’s Turing complete so you could, by a roundabout way and using whatever methods, do anything.
I think the basic way we differ is you have despaired of philosophy getting anywhere, and you’re trying to get rigor from math. But Popper saved philosophy. (And most people didn’t notice.) Example:
With this you could start to answer questions like “Why is X moral in the UK but not in Saudi Arabia?”
You have very limited ambitions. You’re trying to focus on small questions b/c you think bigger ones, like “what is moral objectively?”, are too hard and, since your math won’t answer them, it’s hopeless.
No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to—which i think is all of them, but that doesn’t matter to universality).
Perhaps I don’t understand some nuance of what you mean here. If you can explain it or link to something that explains this in detail I will read it.
But to respond to what I think you mean… If you have a method that can be applied to all types of knowledge, that implies that it is Turing complete; it is therefore equivalent in capability to other Turing complete systems; that also means it is susceptible to the infinite regresses you dislike in “justificationist epistemologies”… i.e. the halting problem.
Also, just because it can be applied to all types of knowledge does not mean it is the best choice for all types of knowledge, or for all types of operations on that knowledge.
I think the basic way we differ is you have despaired of philosophy getting anywhere, and you’re trying to get rigor from math. But Popper saved philosophy. (And most people didn’t notice.) Example:
I would not describe my perspective that way; you may have forgotten that I am a third party in this argument. I think that there is a lot of historical junk in philosophy and that it is continuing to produce a lot of junk—Popper didn’t fix this and neither will Bayesianism; it is more of a people problem—but philosophy has also produced and is producing a lot of interesting and good ideas.
I think one way we differ is that you see a distinct difference between math and philosophy and I see a wide gradient of abstractions for manipulating information. Another is that you think that there is something special about Popper’s approach that allows it to rise above all other approaches in all cases, and I think that there are many approaches and that it is best to choose the approach based on the context.
With this you could start to answer questions like “Why is X moral in the UK but not in Saudi Arabia?”
You have very limited ambitions. You’re trying to focus on small questions b/c you think bigger ones, like “what is moral objectively?”, are too hard and, since your math won’t answer them, it’s hopeless.
This was a response to your request for an example; you read too much into it to assume it implies anything about my ambitions.
A question like “what is moral objectively?” is easy. Nothing is “moral objectively”. Meaning is created within contexts of assessment; if you want to know if something is “moral” you must consider that question with a context that will perform the classification. Not all contexts will produce the same result and not all contexts will even support a meaning for the concept of “moral”.
But to respond to what I think you mean… If you have a method that can be applied to all types of knowledge, that implies that it is Turing complete; it is therefore equivalent in capability to other Turing complete systems;
Minor nitpick: at least capable of modeling any Turing machine, not Turing complete. For example, something that had access to some form of halting oracle would be able to do more than a Turing machine.
Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore “avoids” this difficulty?
Saying your epistemology has a “necessary flaw” is an admission of defeat, that it doesn’t work. The “necessary flaw” is unavoidable if you are committed to the justificationist way of thinking. Popper saw that the whole idea of justification is wrong and he offered a different idea to replace it—an idea with no known flaws. You criticize Popper for being underspecified, yet he elaborated on his ideas in many books. And, furthermore, no amount of mathematical precision or formalism will paper over cracks in justificationist epistemologies.
Saying your epistemology has a “necessary flaw” is an admission of defeat,
In this case, it’s a recognition of reality. I repeat that I would like to defer this conversation until we have something concrete to disagree about. Until then I don’t care about that difference.
The “necessary flaw” arises because all justificationist epistemologies lead to infinite regress or circular arguments or appeals to authority (or even sillier things). That you think there is no alternative to justificationism and I don’t is something concrete we disagree about.
It’s interesting how different Bayesians say different things. They don’t seem to all agree with each other even about their basic claims. Sometimes Bayesianism is proved, other times it is acknowledged to have known flaws. Sometimes it may be completely compatible with Popper, other times it is dethroning Popper. It seems to me that perhaps Bayesianism is a bit underspecified. I wonder why they haven’t sorted out these internal disputes.
Sometimes Bayesianism is proved, other times it is acknowledged to have known flaws. Sometimes it may be completely compatible with Popper, other times it is dethroning Popper. It seems to me that perhaps Bayesianism is a bit underspecified. I wonder why they haven’t sorted out these internal disputes.
There are disputes among the Bayesians. But you are confusing different issues. First, the presence of internal disputes about the borders of an idea is not a priori a problem with an idea that is in progress. The fact that evolutionary biologists disagree about how much neutral drift matters isn’t a reason to reject evolution. (It is possible that I’m reading an unintended implication here.)
Moreover, most of what you are talking about here are not contradictions but failures to understand. That Bayesianism has flaws is a distinct claim from something like Cox’s theorem, which is the sort of result Bayesians mean in what you refer to as “Sometimes Bayesianism is proved” (which, incidentally, is a terribly unhelpful and vague way of discussing the point). The point of results like Cox’s theorem is that if one attempts to formalize epistemology under certain very weak assumptions, one must end up with some form of Bayesianism. At the same time, it is important to keep in mind that this isn’t saying all that much. It doesn’t, for example, say anything about what one’s priors should be. Thus one has the classical disagreement between objective and subjective Bayesians based on what sort of priors to use (and within each of those there is further breakdown; LessWrong seems to mainly have objective Bayesians favoring some form of Occam prior, although just which one is not clear). Similarly, whether or not Bayesianism is compatible with Popper depends a lot on what one means by “Bayesianism”, “compatible”, and “Popper”. Bayesianism is certainly not compatible with a naive-Popperian approach, which is what many are talking about when they say that it is not compatible (and as you’ve already noted, Popper himself wasn’t a naive Popperian). But some people use “Popper” to mean the idea that, given an interesting hypothesis, one should search out experiments which would be likely to falsify the hypothesis if it is false (an idea that actually predates Popper), though what one means by “falsify” can be a problem.
Having read the website you linked to in its entirety, I think we should defer this discussion (as a community) until the next time you explain why someone’s particular belief is wrong, at which point you will be forced to make an actual claim which can be rejected.
In particular, if you ever try to make a claim of the form “You should not believe X, because Bayesianism is wrong, and undesirable Y will happen if you act on this belief” then I would be interested in the resulting discussion. We could do the same thing now, I guess, if you want to make such a claim of some historical decision.
In its entirety? Assuming you spent 40 minutes reading, 0 minutes delay before you saw my post, 0 minutes reading my post here, and 2:23 writing your reply, then you read at a speed of around 833 words per minute. That is very impressive. Where did you learn to do that? How can I learn to do that too?
Given that I do make claims on my website, I wonder why you don’t pick one and point out something you think is wrong with it.
Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)
I did read the parts of your website that relate to the question at hand. I do skim at several hundred words per minute (in much more detail than was needed for this application), though I did not spend the entire time reading. Much of the content of the website (perfectly reasonably) is devoted to things not really germane to this discussion.
If you really want (because I am constitutively incapable of letting an argument on the internet go) you could point to a particular claim you make, of the form I asked for. My issue is not really that I have an objection to any of your arguments—it’s that you seem to offer no concrete points where your epistemology leads to a different conclusion than Bayesianism, or in which Bayesianism will get you into trouble. I don’t think this is necessarily a flaw with your website—presumably it was not designed first and foremost as a response to Bayesianism—but given this observation I would rather defer discussion until such a claim does come up and I can argue in a more concrete way.
To be clear, what I am looking for is a statement of the form: “Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology. Here is why my belief is more correct, and why your belief will do damage.” Or whatever example it is you would like to use. Any example at all. Even an argument that Bayesian reasoning with the Solomonoff prior has been “wrong” where Popper would be clearly “right” at any historical point would be good enough to argue about.
statement of the form: “Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology.”
Do you assert that? It is wrong and has real world consequence. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks
While Fallible Ideas does not comment on Bayesian Epistemology directly, it takes a different approach. You do not find Bayesians advocating the same ways of thinking. They have a different (worse, IMO) emphasis.
I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren’t because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn’t mean we agree or have nothing to discuss.
Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)
Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn’t solve your problem, whereas my epistemology did solve mine.
Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn’t solve your problem, whereas my epistemology did solve mine.
It doesn’t take Popperian epistemology to learn social fluency. I’ve learned to limit conflict and improve the productivity of my discussions, and I am (to the best of my ability) Bayesian in my epistemology.
If you want to credit a particular skill to your epistemology, you should first see whether it’s more likely to arise among those who share your epistemology than those who don’t.
If you want to credit a particular skill to your epistemology, you should first see whether it’s more likely to arise among those who share your epistemology than those who don’t.
That’s a claim that only makes sense in certain epistemological systems...
I don’t have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.
Hmm? I’m not sure who you mean by “we”? If you mean that someone supporting a Popperian approach to epistemology would probably find this idea reasonable, then I agree with you (at least empirically, people claiming to support some form of Popperian approach seem OK with this sort of thing; that’s not to say I understand how they think it is implied/OK in a Popperian framework).
If you want to credit a particular skill to your epistemology, you should first see whether it’s more likely to arise among those who share your epistemology than those who don’t.
I have considered that. Popperian epistemology helps with these issues more. I don’t want to argue about that now because it is an advanced topic and you don’t know enough about my epistemology to understand it (correct me if I’m wrong), but I thought the example could help make a point to the person I was speaking to.
If I don’t understand your explanation and am interested in it, I’m prepared to do the research in order to understand it, but if you can only assert why your epistemology should result in better social learning and not demonstrate that it does so for people in general, I confess that I will probably not be interested enough to follow up.
I will note though, that stating the assumption that another does not understand, but leaving them free to correct you, strikes me as a markedly worse way to minimize conflict and aggression than asking if they have the familiarity necessary to understand the explanation.
I studied philosophy as part of a double major (which I eventually dropped because of the amount of confusion and sophistry I was being expected to humor,) and my acquaintance with Popper, although not as deep as yours, I’m sure, precedes my acquaintance with Bayes. Although it may be that others who I have not read better presented and refined his ideas, Popper’s philosophy did not particularly impress me, whereas the ideas presented by Bayesianism immediately struck me as deserving of further investigation. It’s possible that I haven’t given Popper a fair shake, but it’s not for lack of interest in other epistemologies that I’ve come to identify as Bayesian.
I wouldn’t describe the link as unhelpful, exactly, but I also wouldn’t say that it’s among the best advice for controlling one’s emotions that I’ve received (this was a process I put quite a bit of effort into learning, and I’ve received a fair amount,) so I don’t see how it functions as a demonstration of the superiority of Popperian epistemology.
With regard to the link, it’s simply that it’s less in-depth than other advice I’ve received. There are techniques that it doesn’t cover in meaningful detail, like manipulation of cognitive dissonance (habitually behaving in certain ways to convince yourself to feel certain ways), or recognition of various cognitive biases which will alter our feelings. It’s not that bad as an introduction, but it could do a better job opening up connections to specific techniques to practice or biases to be aware of.
Popper didn’t impress me because it simply wasn’t apparent to me that he was establishing any meaningful improvements to how we go about reasoning and gaining information. Critical rationalism appeared to me to be a way of looking at how we go about the pursuit of knowledge, but to quote Feynman, “Philosophy of science is about as useful to scientists as ornithology is to birds.” It wasn’t apparent to me that trying to become more Popperian should improve the work of scientists at all; indeed, in practice it is my observation that those who try to think of theories more in the light of the criticism they have withstood than their probability in light of the available evidence are more likely to make significant blunders.
Attempting to become more Bayesian in one’s epistemology, on the other hand, had immediately apparent benefits with regard to conducting science well (which are discussed extensively on this site).
I had criticisms of Popper’s arguments to offer, and could probably refresh my memory of them by revisiting his writings, but the deciding factor which kept me from bothering to read further was that, like other philosophers of science I had encountered, it simply wasn’t apparent that he had anything useful to offer, whereas it was immediately clear that Bayesianism did.
Feynman meant normal philosophers of science. Including, I think, Bayesians. He didn’t mean Popper, who he read and appreciated. Feynman himself engaged in philosophy of science, and published it. It’s academic philosophers, of the dominant type, that he loathed.
that those who try to think of theories more in the light of the criticism they have withstood than their probability in light of the available evidence
That’s not really what Popperian epistemology is about. But also: the concept of evidence for theories is a mistake that doesn’t actually make sense, as Popper explained. If you doubt this, do what no one else on this site has yet managed: tell me what “support” means (like in the phrase “supporting evidence”) and tell me how support differs from consistency.
The biggest thing Popper has to offer is the solution to justificationism which has plagued almost everyone’s thinking since Aristotle. You won’t know quite what that is because it’s an unconscious bias for most people. In short it is the idea that theories should be supported/justified/verified/proven, or whatever, whether probabilistically or not. A fraction of this is: he solved the problem of induction. Genuinely solved it, rather than simply giving up and accepting regress/foundations/circularity/whatever.
That’s not really what Popperian epistemology is about. But also: the concept of evidence for theories is a mistake that doesn’t actually make sense, as Popper explained. If you doubt this, do what no one else on this site has yet managed: tell me what “support” means (like in the phrase “supporting evidence”) and tell me how support differs from consistency.
I’ve read his arguments for this; I simply wasn’t convinced that accepting it in any way improved scientific conduct.
“Support” would be data in light of which the subjective likelihood of a hypothesis is increased. If consistency does not meaningfully differ from this with respect to how we respond to data, can you explain why it is more practical to think about data in terms of consistency than support?
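This definition of support can be stated as a small computation: evidence E supports hypothesis H exactly when the posterior P(H|E) exceeds the prior P(H). A minimal sketch, with made-up illustrative probabilities (none of these numbers come from the discussion):

```python
# Sketch of "support" in the Bayesian sense: evidence E supports
# hypothesis H when the posterior P(H|E) exceeds the prior P(H).
# All probabilities below are made-up illustrative values.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

prior = 0.5
post = posterior(prior, p_e_given_h=0.9, p_e_given_not_h=0.3)

# E is consistent with both H and not-H (both likelihoods are nonzero),
# yet it raises the probability of H, because P(E|H) > P(E|not-H).
assert post > prior
```

On this reading, consistency is symmetric (both likelihoods are positive) while support is not, which is the distinction the question is pressing on.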
I’d also like to add that I do know what justificationism is, and your tendency to openly assume deficiencies in the knowledge of others is rather irritating. I normally wouldn’t bother to remark upon it, but given that you presented a superior grasp of socially effective debate conduct as evidence of the strength of your epistemology, I feel the need to point out that I don’t feel like you’re meeting the standards of etiquette I would expect of most members of Less Wrong.
I’ve read his arguments for this; I simply wasn’t convinced that accepting it in any way improved scientific conduct.
Yet again you disagree with no substantive argument. If you don’t have anything to say, why are you posting?
can you explain why it is more practical to think about data in terms of consistency than support?
Well, consistency is good as far as it goes. If we see 10 white swans, we should reject “all swans are black” (yes, even this much depends on some other stuff). Consistency does the job without anything extraneous or misleading.
The support idea claims that sometimes evidence supports one idea it is consistent with more than another. This isn’t true, except in special cases which aren’t important.
The way Popper improves on this is by noting that there are always many hypotheses consistent with the data. Saying their likelihood increases is pointless. It does not help deal with the problem of differentiating between them. Something else, not support, is needed. This leaves the concept of support with nothing useful to do, except be badly abused in sloppy arguments. (I have in mind arguments I’ve seen elsewhere. Lots of them. What people do is find some evidence, and some theory it is consistent with, and say the theory is supported, so now they have a strong argument or whatever. And they are totally selective about this. You try to tell them, “Well, this other theory is also consistent with the data, so it’s supported just as much, right?” and they say no, theirs fits the data better, so it’s supported more. But you ask what the difference is, and they can’t tell you, because there is no answer. The idea that a theory can fit the data better than another, when both are consistent with the data, is a mistake (again, there are some special cases that don’t matter in practice).)
The support idea claims that sometimes evidence supports one idea it is consistent with more than another. This isn’t true, except in special cases which aren’t important.
Suppose I ask a woman if she has children. She says no.
This is supporting evidence for the hypothesis that she does not have children; it raises the likelihood from my perspective that she is childless.
It is entirely consistent with the hypothesis that she has children; she would simply have to be lying.
So it appears to me that in this case, whatever arguments you might make regarding induction, viewing the data in terms of consistency does not inform my behavior as well.
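The update being described in this example can be made explicit with Bayes’ theorem; the prior and the rate of lying below are assumed illustrative numbers, not anything stated in the example:

```python
# Hypothetical numbers for the example: prior that she is childless,
# and the probability she answers "no" under each hypothesis.
p_childless = 0.5
p_no_given_childless = 0.99  # she is childless and answers truthfully
p_no_given_children = 0.05   # she has children but lies

# Bayes' theorem: P(childless | "no")
p_no = (p_no_given_childless * p_childless
        + p_no_given_children * (1 - p_childless))
p_childless_given_no = p_no_given_childless * p_childless / p_no

# Her answer is consistent with both hypotheses (neither likelihood
# is zero), yet it shifts the posterior sharply toward childlessness.
assert p_childless_given_no > p_childless
```

Under these assumptions the posterior rises from 0.5 to roughly 0.95, which is the sense in which the answer "supports" childlessness despite being consistent with both hypotheses.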
This is the standard story. It is nothing but an appeal to intuition (and/or unstated background knowledge, unstated explanations, unstated assumptions, etc). There is no argument for it and there never has been one.
Refuting this common mistake is something important Popper did.
Try reading your post again. You simply assumed that her not having children is more likely. That is not true from the example presented, without some unstated assumptions being added. There is no argument in your post. That makes it very difficult to argue against because there’s nothing to engage with.
It could go either way. You know it could go either way. You claim one way fits the data better, but you don’t offer any rigorous guidelines (or anything else) for figuring out which way fits better. What are the rules to decide which consistent theories are more supported than others?
Of course it could go either way. But if I behaved in everyday life as if it were equally likely to go either way, I would be subjecting myself to disaster. For practical purposes it has always served me better to accept that certain hypotheses that are consistent with the available data are more probable than others, and while I cannot prove that this makes it more likely that it will continue to do so in the future, I’m willing to bet quite heavily that it will.
If Popper’s epistemology does not lead to superior results to induction, and at best, only reduces to procedures that perform as well, then I do not see why I should regard his refutation of induction as important.
Then you have your answer: support is non-boolean. I don’t think a boolean concept of consistency of observations with anything makes sense, though (consistent would mean P(E|H) > 0, but observations never have a probability of 0 anyway, so every observation would be consistent with everything, or you’d need an arbitrary cut-off: P(observe black sheep | all sheep are white) > 0, but is very small).
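The point about the arbitrary cut-off can be made concrete. Under any noise model where misreading or hallucination has nonzero probability, the boolean test P(E|H) > 0 excludes nothing, while the real-valued likelihoods still discriminate (eps and the other numbers are illustrative assumptions):

```python
# Observing a black sheep under "all sheep are white" is still possible
# via hallucination or misreading, say with tiny probability eps.
eps = 1e-6
p_black_given_all_white = eps    # nonzero: "consistent" in the boolean sense
p_black_given_some_black = 0.3   # illustrative likelihood under the rival hypothesis

# The boolean consistency test passes for BOTH hypotheses...
assert p_black_given_all_white > 0 and p_black_given_some_black > 0

# ...so only the real-valued likelihood ratio distinguishes them.
likelihood_ratio = p_black_given_some_black / p_black_given_all_white
assert likelihood_ratio > 1000
```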
Some theories predict that some things won’t happen (0 probability). I consider this kind of theory important.
You say I have my answer, but you have not answered. I don’t think you’ve understood the problem. To try to repeat myself less, check out the discussion here, currently at the bottom:
Some theories predict that some things won’t happen (0 probability). I consider this kind of theory important.
But they don’t predict that you won’t hallucinate, or misread the experimental data, or whatever. Some things not happening doesn’t mean some things won’t be observed.
You say I have my answer, but you have not answered.
You asked how support differs from consistency. Boolean vs. real number is a difference. Even if you arbitrarily decide that real numbers are not allowed and only booleans are, that doesn’t make it inconsistent, on the part of those who use real numbers, to differentiate between their use of real numbers and your use of booleans.
Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference?
No. It provides an example of a way in which you are better than me. I am overwhelmingly confident that I can find ways in which I am better than you.
Do you assert that? It is wrong and has real world consequence. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks
Could you explain how a Popperian disputes such an assertion? Through only my own fault, I can’t listen to an mp3 right now.
My understanding is that anyone would make that argument in the same way: by providing evidence in the Bayesian sense, which would convince a Bayesian. What I am really asking for is a description of why your beliefs aren’t the same as mine but better. Why is it that a Popperian disagrees with a Bayesian in this case? What argument do they accept that a Bayesian wouldn’t? What is the corresponding calculation a Popperian does when he has to decide how to gamble with the lives of six billion people on an uncertain assertion?
I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren’t because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn’t mean we agree or have nothing to discuss.
I agree that different ways of thinking can be better or worse even when they come to the same conclusions. You seem to be arguing that Bayesianism is wrong, which is a very different thing. At best, you seem to be claiming that trying to come up with probabilities is a bad idea. I don’t yet understand exactly what you mean. Would you never take a bet? Would never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?
This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesian’s willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.
I do not plan to continue this discussion except in the pursuit of an example about which we could actually argue productively.
Could you explain how a Popperian disputes such an assertion? [(50% probability of humanity surviving the next century)]
e.g. by pointing out that whether we do or don’t survive depends on human choices, which in turn depend on human knowledge. And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge. So this is not prediction but prophecy. And prophecy has a built-in bias towards pessimism: because we can’t make predictions about future knowledge, prophets in general make predictions that disregard future knowledge. These are explanatory, philosophical arguments which do not rely on evidence (that is appropriate because it is not a scientific or empirical mistake being criticized). No corresponding calculation is made at all.
You ask about how Popperians make decisions if not with such calculations. Well, say we want to decide if we should build a lot more nuclear power plants. This could be taken as gambling with a lot of lives, and maybe even all of them. Of course, not doing it could also be taken as a way of gambling with lives. There’s no way to never face any potential dangers. So, how do Popperians decide? They conjecture an answer, e.g. “yes”. Actually, they make many conjectures, e.g. also “no”. Then they criticize the conjectures, and make more conjectures. So for example I would criticize “yes” for not providing enough explanatory detail about why it’s a good idea. Thus “yes” would be rejected, but a variant of it like “yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites” would be better. If I didn’t understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the “no” answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?
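The procedure sketched in that paragraph is procedural enough to caricature in code. The conjectures and the single criticism below are toy stand-ins (real criticisms are explanations of mistakes, not string tests):

```python
# Toy sketch of the conjecture-and-criticism loop described above.
conjectures = [
    "yes",
    "no",
    "yes, because nuclear plants are safe, clean, and efficient",
]

def criticized(conjecture):
    # Stand-in for one criticism from the example: bare answers that
    # provide no explanatory detail are rejected.
    return "because" not in conjecture

# Keep only the conjectures that survive criticism; with exactly one
# survivor, that is the conjecture acted upon.
surviving = [c for c in conjectures if not criticized(c)]
assert surviving == ["yes, because nuclear plants are safe, clean, and efficient"]
```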
You seem to be arguing that Bayesianism is wrong, which is a very different thing.
I think it’s wrong as an epistemology. For example because induction is wrong, and the notion of positive support is wrong. Of course Bayes’ theorem is correct, and various math you guys have done is correct. I keep getting conflicting statements from people about whether Bayesianism conflicts with Popperism or not, and I don’t want to speak for you guys, nor do I want to discourage anyone from finding the shared ideas or discourage them from learning from both.
Would you never take a bet?
Bets are made on events, like which team wins a sports game. Probabilities are fine for events. Probabilities of the truth of theories are problematic (because, e.g., there is no way to make them non-arbitrary). And it’s not something a fallibilist can bet on, because he accepts we never know the final truth for sure, so how are we to set up a decision procedure that decides who won the bet?
Would never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?
We are not afraid of uncertainty. Popperian epistemology is fallibilist. It rejects certainty. Life is always uncertain. That does not imply probability is the right way to approach all types of uncertainty.
This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesian’s willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.
Bayesian reasoning diverges when it says that ideas can be positively supported. We diverge because Popper questioned the concept of positive support, as I posted in the original text on this page, and which no one has answered yet. The criticism of positive support begins by considering what it is (you tell me) and how it differs from consistency (you tell me).
So, how do Popperians decide? They conjecture an answer, e.g. “yes”. Actually, they make many conjectures, e.g. also “no”. Then they criticize the conjectures, and make more conjectures. So for example I would criticize “yes” for not providing enough explanatory detail about why it’s a good idea. Thus “yes” would be rejected, but a variant of it like “yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites” would be better. If I didn’t understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the “no” answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?
Almost, but you seem to have left out the rather important detail of how to actually make the decision. Based on the process of criticizing conjectures you’ve described so far, it seems that there are two basic routes you can take to finish the decision process once the critical smoke has cleared.
First, you can declare that, since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other. In this way you don’t actually make a decision and the problem remains unsolved.
Second, you can choose to go with the conjecture that best weathered the criticisms you were able to muster. That’s fine, but then it’s not clear that you’ve done anything different from what a Bayesian would have done—you’ve simply avoided explicitly talking about things like probabilities and priors.
Which of these is a more accurate characterization of the Popperian decision process? Or is it something radically different from these two altogether?
When you have exactly one non-refuted theory, you go with that.
The other cases are more complicated and difficult to understand.
Suppose I gave you the answer to the other cases, and we talked about it enough for you to understand it. What would you change your mind about? What would you concede?
If I convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?
If you have lots of other objections you are interested in, I would suggest you just accept for now that we have a method and focus on the other issues first.
[option 1] since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other.
But some are criticized and some aren’t.
[option 2] conjecture that best weathered the criticisms you were able to muster
But how is that to be judged?
No, we always go with uncriticized ideas (which may be close variants of ideas that were criticized). Even the terminology is very tricky here—the English language is not well adapted to expressing these ideas. (In particular, the concept “uncriticized” is a very substantive one with a lot of meaning, and the word for it may be misleading, but other words are even worse. And the straightforward meaning is OK for present purposes, but may be problematic in future discussion.)
Or is it something radically different from these two altogether?
Yes, different. Both of these are justificationist ways of thinking. They consider how much justification each theory has. The first one rejects a standard source of justification, does not replace it, and ends up stuck. The second one replaces it, and ends up, as you say, reasonably similar to Bayesianism. It still uses the same basic method of tallying up how much of some good thing (which we call justification) each theory has, and then judging by what has the most.
Popperian epistemology does not justify. It uses criticism for a different purpose: a criticism is an explanation of a mistake. By finding mistakes, and explaining what the mistakes are, and conjecturing better ideas which we think won’t have those mistakes, we learn and improve our knowledge.
If I convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?
Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology. That is, given what I understand about the set of ideas, it is not clear to me how we would go about making practical scientific decisions. With that said, I can’t reasonably guarantee that I will not have later objections as well before we’ve even had the discussion!
So let me see if I’m understanding this correctly. What we are looking for is the one conjecture which appears to be completely impervious to any criticism that we can muster against it, given our current knowledge. Once we have found such a conjecture, we—I don’t want to say “assume that it’s true,” because that’s probably not correct—we behave as if it were true until it finally is criticized and, hopefully, replaced by a new conjecture. Is that basically right?
I’m not really seeing how this is fundamentally anti-justificationist. It seems to me that the Popperian epistemology still depends on a form of justification, but that it relies on a sort of boolean all-or-nothing justification rather than allowing graded degrees of justification. For example, when we say something like, “in order to make a decision, we need to have a guiding theory which is currently impervious to criticism” (my current understanding of Popper’s idea, roughly illustrated), isn’t this just another way of saying: “the fact that this theory is currently impervious to criticism is what justifies our reliance on it in making this decision?”
In short, isn’t imperviousness to criticism a type of justification in itself?
Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology.
OK then :-) Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?
Is that basically right?
That is the general idea (but incomplete).
The reason we behave as if it’s true is that it’s the best option available. All the other theories are criticized (= we have an explanation of what we think is a mistake/flaw in them). We wouldn’t want to act on an idea that we (thought we) saw a mistake in, over one we don’t think we see any mistake with—we should use what (fallible) knowledge we have.
A justification is a reason a conjecture is good. Popperian epistemology basically has no such thing. There are no positive arguments, only negative. What we have instead of positive arguments is explanations. These are to help people understand an idea (what it says, what problem it is intended to solve, how it solves it, why they might like it, etc...), but they do not justify the theory, they play an advisory role (also note: they pretty much are the theory, they are the content that we care about in general).
One reason that not being criticized isn’t a justification is that saying it is gets you a regress problem. So let’s not say that! The other reason is: what would that be adding as compared with not saying it? It’s not helpful (and if you give specific details/claims of how it is helpful, which are in line with the justificationist tradition, then I can give you specific criticisms of those).
Terminology isn’t terribly important. David Deutsch used the word justification in his explanation of this in the dialog chapter of The Fabric of Reality (highly recommended). I don’t like to use it. But the important thing is not to mean anything that causes a regress problem, or to expect justification to come from authority, or various other mistakes. If you want to take the Popperian conception of a good theory and label it “justified” it doesn’t matter so much.
Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?
I agree that the nested comment format is a little cumbersome (in fact, this is a bit of a complaint of mine about the LW format in general), but it’s not clear that this discussion warrants an entirely new topic.
Terminology isn’t terribly important . . . If you want to take the Popperian conception of a good theory and label it “justified” it doesn’t matter so much.
Okay. So what is really at issue here is whether or not the Popperian conception of a good theory, whatever we call that, leads to regress problems similar to those experienced by “justificationist” systems.
It seems to me that it does! You claim that the particular feature of justificationist systems that leads to a regress is their reliance on positive arguments. Popper’s system is said to avoid this issue because it denies positive arguments and instead only recognizes negative arguments, which circumvents the regress issue so long as we accept modus tollens. But I claim that Popper’s system does in fact rely on positive arguments at least implicitly, and that this opens the system to regress problems. Let me illustrate.
According to Popper, we ought to act on whatever theory we have that has not been falsified. But that itself represents a positive argument in favor of any non-falsified theory! We might ask: okay, but why ought we to act only on theories which have not been falsified? We could probably come up with a pretty reasonable answer to this question—but as you can see, the regress has begun.
We might ask: okay, but why ought we to act only on theories which have not been falsified? We could probably come up with a pretty reasonable answer to this question—but as you can see, the regress has begun.
No regress has begun. I already answered why:
The reason we behave as if it’s true is that it’s the best option available. All the other theories are criticized (= we have an explanation of what we think is a mistake/flaw in them). We wouldn’t want to act on an idea that we (thought we) saw a mistake in, over one we don’t think we see any mistake with—we should use what (fallible) knowledge we have.
Try to regress me.
It is possible, if you want, to create a regress of some kind which isn’t the same one and isn’t important. The crucial issue is: are the questions that continue the regress any good? Do they have some kind of valid point to them? If not, then I won’t regard it as a real regress problem of the same type. You’ll probably wonder how that’s evaluated, but, well, it’s not such a big deal. We’ll quickly get to the point where your attempts to create regress look silly to you. That’s different than the regresses inductivists face where it’s the person trying to defend induction who runs out of stuff to say.
And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge.
You’re equivocating between “knowing exactly the contents of the new knowledge”, which may be impossible for the reason you describe, and “know some things about the effect of the new knowledge”, which we can do. As Eliezer said, I may not know which move Kasparov will make, but I know he will win.
what you’re doing here is conflating Bayes’ theorem (which is about probability, and which is a matter of logic, and which is correct) with Bayesian epistemology (the application of Bayes’ theorem to epistemological problems, rather than to the math behind betting).
That’s because to a Bayesian, these things are the same thing. Epistemology is all about probability—and vice versa. Bayes’s theorem includes induction and confirmation. You can’t accept Bayes’s theorem and reject induction without crazy inconsistency—and Bayes’s theorem is just the math of probability theory.
If I understand correctly, I think curi is saying that there’s no reason for probability and epistemology to be the same thing. That said, I don’t entirely understand his/her argument in this thread, as some of the criticisms he/she mentions are vague. For example, what are these “epistemological problems” that Popper solves but Bayes doesn’t?
FYI that won’t work. Wikipedia doesn’t understand Popper. It is common for secondary sources to promote myths, as Jaynes did. A pretty good overview is the Popper book by Bryan Magee (only about 100 pages).
I posted criticisms of Jaynes’ arguments (or more accurately, his assumptions). I posted an argument about support. Why don’t you answer it?
You are basically admitting that your epistemology is wrong. Given that Popper has an epistemology which does not have this feature, and the rejections of him by Bayesians are unscholarly mistakes, you should be interested in it!
Of course if I wrote up his whole epistemology and posted it here for you that would be nice. But that would take a long time, and it would repeat content from his books.
If you want somewhere to start online, you could read
http://fallibleideas.com/
That is not primarily what we want. And what you’re doing here is conflating Bayes’ theorem (which is about probability, and which is a matter of logic, and which is correct) with Bayesian epistemology (the application of Bayes’ theorem to epistemological problems, rather than to the math behind betting).
Are you open to the possibility that the general outline of your approach is itself mistaken, and that the theorems you have proven within your framework of assumptions are therefore not all true? Or:
Are you so sure of yourself—that you are right about many things—that you will dismiss all rival ideas without even having to know what they say? Even when they offer things your approach doesn’t have, such as not having arbitrary foundations.
What you’re doing is accepting ideas which have been popular since Aristotle. When you think no other ways are possible, that’s bias talking. Your ideas have become common sense (not the Bayes part, but the philosophical approach to epistemology you are taking which comes before you use Bayes’s theorem at all).
Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper’s epistemology? Someone should have done it, right? And if no one ever has, then you should be interested in investigating, right? And also interested in investigating what is wrong with your movement that it never addressed rival ideas in scholarly debate. (I have looked for such a criticism. Never managed to find one.)
Most things in the space of possible documents can’t be refuted, because they don’t correspond to anything refutable. They are simply confused, and irredeemably. In the case of epistemology, virtually everything that has ever been said falls into this category. I am glad that I don’t have to spend time thinking about it, because it is solved. I would not generally criticize a rival’s ideas, because I no longer care. The problem is solved, and I can go work on things that still matter.
Once I know the definitive answer to a question, I will dismiss all other answers (rather than trying to poke holes in them). The only sort of argument which warrants response is an objection to my current definitive answer. So ignorance of Popper is essentially irrelevant (and I suspect I couldn’t object to anything in his philosophy, because it has essentially no content concrete enough to be defeated by mere reasoning).
The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted—whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested, except that I am fairly confident the problem has no solution for what are essentially obvious reasons. Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.
Really this entire discussion comes down to what we want out of epistemology.
What do you want? I don’t understand at all. Whatever you specify, I would be shocked if critical rationality provided it. Here is what I want, and maybe you will agree:
I want to decide between action A and action B. To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world. In particular, by choosing B instead of A, I am making a bet about the consequences of A and B. I would like to make such bets in the best possible way.
Lo! This is precisely what Bayesianism allows me to do. Why is there more to say?
You can object that it involves knowing a prior. But from the problem statement it is obvious (as a mathematical fact) that there is a universe in which each possible prior is the best one. Is there a strategy that does better than Bayesianism with a reasonable prior in all possible universes? Maybe, but Popper’s ideas aren’t nearly precise enough to answer the question (by which I mean, not even at the point where this question, to me clearly the most important one, is meaningful). Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore “avoids” this difficulty?
If I have to bet, or make a decision that affects people's lives and amounts to a bet, I am going to use Bayesianism, or a computational heuristic which I justify by Bayesianism. Doing something else seems irresponsible.
You don’t think confused things can be criticized? You can, for example, point out ambiguous passages. That would be a criticism. If they have no clarification to offer, then it would be (tentatively and fallibly) decisive (pending some reason to reconsider).
But you haven’t provided any argument that Popper in particular was confused, irrefutable, or whatever. I don’t know about you, but as someone who wants to improve my epistemological knowledge I think it’s important to consider all the major ideas in the field at the very least enough to know one good criticism of each.
Refusing to address criticism because you think you already have the solution is very closed minded, is it not? You think you’re done with thinking, you have the final truth, and that’s that..?
Popper published several of those. Where’s the response from Bayesians?
One thing to note is it’s hard to understand his objections without understanding his philosophy a bit more broadly (or you will misread stuff, not knowing the broader context of what he is trying to say, what assumptions he does not share with you, etc...)
Popper solved that problem.
The standard reasons seem obvious because of your cultural bias. Since Aristotle some philosophical assumptions have been taken for granted by almost everyone. Now most people regard them as obvious. Given those assumptions, I agree that your conclusion follows (no way to avoid arbitrariness). The assumptions are called "justificationism" by Popperians, and are criticized in detail. I think you ought to be interested in this.
One criticism of justificationism is that it causes the regress/arbitrariness/foundations problem. The problem doesn’t exist automatically but is being created by your own assumptions.
What are you talking about? You haven’t read his books and claim he didn’t give enough detail? He was something of a workaholic who didn’t watch TV, didn’t have a big social life, and worked and wrote all the time.
To create knowledge, including explanatory and non-instrumentalist knowledge. You come off like a borderline positivist to me, who has trouble with the notion that non-empirical stuff is even meaningful. (No offense intended, and I’m not assuming you actually are a positivist, but I’m not really seeing much difference yet.)
To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don’t think Bayesianism addresses this well.
Neither. You can and should do better!
Given well defined contexts and meanings for good and bad I don't see why Bayesianism could not be effectively applied to moral problems.
Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don’t.
You can’t create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can’t evaluate).
You've repeatedly claimed that the Popperian approach can somehow address moral issues. Despite requests you've shown no details of that claim other than to say that you do the same thing you would do but with moral claims. So let's work through a specific moral issue. Can you take an example of a real moral issue that has been controversial historically (like, say, slavery or free speech) and show how the Popperian would approach it? A concrete worked-out example would be very helpful.
http://lesswrong.com/lw/552/reply_to_benelliott_about_popper_issues/3uv7
And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.
Consider that you are replying to a statement I just said that all you’ve done is say that it would use the same methodologies. Given that, does this reply seem sufficient? Do I need to repeat my request for a worked example (which is not included in your link)?
First of all, you shouldn’t lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Bayes’ theorem is an abstraction. If you don’t have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn’t use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction.
This doesn’t mean that Bayes’ theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.
Sorry. I have no idea who is who. Don’t mind me.
The Popperian method is universal.
Well, umm, yes, but that's no help. My iMac is definitely Turing complete. It could run an AI. It could do whatever. But we don't know how to make it do that stuff. Epistemology should help us.
Example or details?
No problem, I’m just pointing out that there are other perspectives out here.
Sure, in the sense it is Turing complete; but that doesn’t make it the most efficient approach for all cases. For example I’m not going to use it to decide the answer to the statement “2 + 3”, it is much more efficient for me to use the arithmetic abstraction.
Agreed, it is one of the reasons that I am actively working on epistemology.
The naive Bayes classifier can be an effective way to classify discrete input into independent classes. Certainly for some cases it could be used to classify something as “good” or “bad” based on example input.
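To make this concrete, here is a minimal sketch of a naive Bayes classifier labeling short statements "good" or "bad". The training phrases and labels are invented purely for illustration, and a real classifier would need far more data; this just shows the mechanics of the claim.

```python
# Minimal naive Bayes text classifier with add-one (Laplace) smoothing.
# Training data below is invented for illustration only.
from collections import Counter
import math

train = [
    ("helping strangers in need", "good"),
    ("sharing food with the hungry", "good"),
    ("stealing from the poor", "bad"),
    ("lying to harm others", "bad"),
]

# Count word frequencies per class and overall class frequencies.
word_counts = {"good": Counter(), "bad": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def classify(text):
    # argmax over classes of log P(class) + sum of log P(word | class),
    # using add-one smoothing so unseen words don't zero out a class.
    best, best_score = None, float("-inf")
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("sharing with strangers"))  # → good
```

The same structure works for any discrete input whose features are treated as independent given the class, which is exactly the (strong) assumption the "naive" in the name refers to.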
Bayesian networks can capture the meaning within interdependent sets. For example the meaning of words forms a complex network; if the meaning of a single word shifts it will probably result in changes to the meanings of related words; and in a similar way ideas on morality form connected interdependent structures.
Within a culture a particular moral position may be dependent on other moral positions, or even other aspects of the culture. For example a combination of religious beliefs and inheritance traditions might result in a belief that a husband is justified in killing an unfaithful wife. A Bayesian network trained on information across cultures might be able to identify these kinds of relationships. With this you could start to answer questions like “Why is X moral in the UK but not in Saudi Arabia?”
No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to—which I think is all of them, but that doesn't matter to universality).
Not in the sense that it’s Turing complete so you could, by a roundabout way and using whatever methods, do anything.
I think the basic way we differ is you have despaired of philosophy getting anywhere, and you’re trying to get rigor from math. But Popper saved philosophy. (And most people didn’t notice.) Example:
You have very limited ambitions. You're trying to focus on small questions b/c you think bigger ones, like "what is moral objectively?", are too hard and, since your math won't answer them, it's hopeless.
Perhaps I don’t understand some nuance of what you mean here. If you can explain it or link to something that explains this in detail I will read it.
But to respond to what I think you mean… If you have a method that can be applied to all types of knowledge, that implies that it is Turing complete; it is therefore equivalent in capability to other Turing complete systems; that also means it is susceptible to the infinite regresses you dislike in “justificationist epistemologies”… i.e. the halting problem.
Also, just because it can be applied to all types of knowledge does not mean it is the best choice for all types of knowledge, or for all types of operations on that knowledge.
I would not describe my perspective that way; you may have forgotten that I am a third party in this argument. I think that there is a lot of historical junk in philosophy and that it is continuing to produce a lot of junk—Popper didn't fix this and neither will Bayesianism; it is more of a people problem—but philosophy has also produced and is producing a lot of interesting and good ideas.
I think one way we differ is that you see a distinct difference between math and philosophy and I see a wide gradient of abstractions for manipulating information. Another is that you think that there is something special about Popper’s approach that allows it to rise above all other approaches in all cases, and I think that there are many approaches and that it is best to choose the approach based on the context.
This was a response to your request for an example; you read too much into it to assume it implies anything about my ambitions.
A question like "what is moral objectively?" is easy. Nothing is "moral objectively". Meaning is created within contexts of assessment; if you want to know if something is "moral" you must consider that question within a context that will perform the classification. Not all contexts will produce the same result, and not all contexts will even support a meaning for the concept of "moral".
Minor nitpick: at least capable of modeling any Turing machine, not Turing complete. For example, something that had access to some form of halting oracle would be able to do more than a Turing machine.
Saying your epistemology has a “necessary flaw” is an admission of defeat, that it doesn’t work. The “necessary flaw” is unavoidable if you are committed to the justificationist way of thinking. Popper saw that the whole idea of justification is wrong and he offered a different idea to replace it—an idea with no known flaws. You criticize Popper for being underspecified, yet he elaborated on his ideas in many books. And, furthermore, no amount of mathematical precision or formalism will paper over cracks in justificationist epistemologies.
In this case, it's recognition of reality. I repeat that I would like to defer this conversation until we have something concrete to disagree about. Until then I don't care about that difference.
The “necessary flaw” arises because all justificationist epistemologies lead to infinite regress or circular arguments or appeals to authority (or even sillier things). That you think there is no alternative to justificationism and I don’t is something concrete we disagree about.
Adding a reference for this comment: Münchhausen Trilemma.
It’s interesting how different Bayesians say different things. They don’t seem to all agree with each other even about their basic claims. Sometimes Bayesianism is proved, other times it is acknowledged to have known flaws. Sometimes it may be completely compatible with Popper, other times it is dethroning Popper. It seems to me that perhaps Bayesianism is a bit underspecified. I wonder why they haven’t sorted out these internal disputes.
There are disputes among the Bayesians. But you are confusing different issues. First, the presence of internal disputes about the borders of an idea is not a priori a problem with an idea that is in progress. The fact that evolutionary biologists disagree about how much neutral drift matters isn’t a reason to reject evolution. (It is possible that I’m reading an unintended implication here.)
Moreover, most of what you are talking about here are not contradictions but failures to understand. The claim that Bayesianism has flaws is distinct from the claim that it is proved; when someone invokes something like Cox's theorem, that is the sort of result you refer to as "Sometimes Bayesianism is proved" (which, incidentally, is a terribly unhelpful and vague way of discussing the point). The point of results like Cox's theorem is that if one attempts, under certain very weak assumptions, to formalize epistemology, one must end up with some form of Bayesianism. At the same time it is important to keep in mind that this isn't saying all that much. It doesn't, for example, say anything about what one's priors should be. Thus one has the classical disagreement between objective and subjective Bayesians based on what sort of priors to use (and within each of those there is further breakdown; LessWrong seems to mainly have objective Bayesians favoring some form of Occam prior, although just which one is not clear). Similarly, whether or not Bayesianism is compatible with Popper depends a lot on what one means by "Bayesianism", "compatible", and "Popper". Bayesianism is certainly not compatible with a naive-Popperian approach, which is what many are talking about when they say that it is not compatible (and as you've already noted, Popper himself wasn't a naive Popperian). But some people use Popper to mean the idea that, given an interesting hypothesis, one should seek out experiments which would be likely to falsify the hypothesis if it is false (an idea that actually predates Popper), though what one means by "falsify" can be a problem.
Why don’t you fix the WP article?
Having read the website you linked to in its entirety, I think we should defer this discussion (as a community) until the next time you explain why someone’s particular belief is wrong, at which point you will be forced to make an actual claim which can be rejected.
In particular, if you ever try to make a claim of the form “You should not believe X, because Bayesianism is wrong, and undesirable Y will happen if you act on this belief” then I would be interested in the resulting discussion. We could do the same thing now, I guess, if you want to make such a claim of some historical decision.
Edit: changed wording to be less of an ass.
In its entirety? Assuming you spent 40 minutes reading, 0 minutes delay before you saw my post, 0 minutes reading my post here, and 2:23 writing your reply, then you read at a speed of around 833 words per minute. That is very impressive. Where did you learn to do that? How can I learn to do that too?
Given that I do make claims on my website, I wonder why you don’t pick one and point out something you think is wrong with it.
Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)
I did read the parts of your website that relate to the question at hand. I do skim at several hundred words per minute (in much more detail than was needed for this application), though I did not spend the entire time reading. Much of the content of the website (perfectly reasonably) is devoted to things not really germane to this discussion.
If you really want (because I am constitutively incapable of letting an argument on the internet go) you could point to a particular claim you make, of the form I asked for. My issue is not really that I have an objection to any of your arguments—it's that you seem to offer no concrete points where your epistemology leads to a different conclusion than Bayesianism, or in which Bayesianism will get you into trouble. I don't think this is necessarily a flaw with your website—presumably it was not designed first and foremost as a response to Bayesianism—but given this observation I would rather defer discussion until such a claim does come up and I can argue in a more concrete way.
To be clear, what I am looking for is a statement of the form: “Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology. Here is why my belief is more correct, and why your belief will do damage.” Or whatever example it is you would like to use. Any example at all. Even an argument that Bayesian reasoning with the Solomonoff prior has been “wrong” where Popper would be clearly “right” at any historical point would be good enough to argue about.
Do you assert that? It is wrong and has real-world consequences. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks
While Fallible Ideas does not comment on Bayesian Epistemology directly, it takes a different approach. You do not find Bayesians advocating the same ways of thinking. They have a different (worse, IMO) emphasis.
I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren’t because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn’t mean we agree or have nothing to discuss.
Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn’t solve your problem, whereas my epistemology did solve mine.
It doesn’t take Popperian epistemology to learn social fluency. I’ve learned to limit conflict and improve the productivity of my discussions, and I am (to the best of my ability) Bayesian in my epistemology.
If you want to credit a particular skill to your epistemology, you should first see whether it’s more likely to arise among those who share your epistemology than those who don’t.
That’s a claim that only makes sense in certain epistemological systems...
I don’t have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.
Hmm? I'm not sure who you mean by "we". If you mean that someone supporting a Popperian approach to epistemology would probably find this idea reasonable, then I agree with you (at least empirically, people claiming to support some form of Popperian approach seem OK with this sort of thing; that's not to say I understand how they think it is implied/OK in a Popperian framework).
I have considered that. Popperian epistemology helps with these issues more. I don’t want to argue about that now because it is an advanced topic and you don’t know enough about my epistemology to understand it (correct me if I’m wrong), but I thought the example could help make a point to the person I was speaking to.
If I don’t understand your explanation and am interested in it, I’m prepared to do the research in order to understand it, but if you can only assert why your epistemology should result in better social learning and not demonstrate that it does so for people in general, I confess that I will probably not be interested enough to follow up.
I will note though, that stating the assumption that another does not understand, but leaving them free to correct you, strikes me as a markedly worse way to minimize conflict and aggression than asking if they have the familiarity necessary to understand the explanation.
You could begin by reading
http://fallibleideas.com/emotions
And the rest of the site. If you don’t understand any connections between it and Popperian epistemology, feel free to ask.
I’m not asking you to be interested in this, but I do think you should have some interest in rival epistemologies.
I studied philosophy as part of a double major (which I eventually dropped because of the amount of confusion and sophistry I was being expected to humor), and my acquaintance with Popper, although not as deep as yours, I'm sure, precedes my acquaintance with Bayes. Although it may be that others who I have not read better presented and refined his ideas, Popper's philosophy did not particularly impress me, whereas the ideas presented by Bayesianism immediately struck me as deserving of further investigation. It's possible that I haven't given Popper a fair shake, but it's not for lack of interest in other epistemologies that I've come to identify as Bayesian.
I wouldn’t describe the link as unhelpful, exactly, but I also wouldn’t say that it’s among the best advice for controlling one’s emotions that I’ve received (this was a process I put quite a bit of effort into learning, and I’ve received a fair amount,) so I don’t see how it functions as a demonstration of the superiority of Popperian epistemology.
You say Popper didn’t impress you. Why not? Did you have any criticism of his ideas? Any substantive argument against them?
Do you have any criticism of the linked ideas? You just said it doesn’t seem that good to you, but you didn’t give any kind of substantive argument.
With regards to the link, it's simply that it's less in depth than other advice I've received. There are techniques that it doesn't cover in meaningful detail, like manipulation of cognitive dissonance (habitually behaving in certain ways to convince yourself to feel certain ways), or recognition of various cognitive biases which will alter our feelings. It's not that bad as an introduction, but it could do a better job opening up connections to specific techniques to practice or biases to be aware of.
Popper didn’t impress me because it simply wasn’t apparent to me that he was establishing any meaningful improvements to how we go about reasoning and gaining information. Critical rationalism appeared to me to be a way of looking at how we go about the pursuit of knowledge, but to quote Feynman, “Philosophy of science is about as useful to scientists as ornithology is to birds.” It wasn’t apparent to me that trying to become more Popperian should improve the work of scientists at all; indeed, in practice it is my observation that those who try to think of theories more in the light of the criticism they have withstood than their probability in light of the available evidence are more likely to make significant blunders.
Attempting to become more Bayesian in one's epistemology, on the other hand, had immediately apparent benefits with regards to conducting science well (which are discussed extensively on this site).
I had criticisms of Popper’s arguments to offer, and could probably refresh my memory of them by revisiting his writings, but the deciding factor which kept me from bothering to read further was that, like other philosophers of science I had encountered, it simply wasn’t apparent that he had anything useful to offer, whereas it was immediately clear that Bayesianism did.
Feynman meant normal philosophers of science. Including, I think, Bayesians. He didn’t mean Popper, who he read and appreciated. Feynman himself engaged in philosophy of science, and published it. It’s academic philosophers, of the dominant type, that he loathed.
That’s not really what Popperian epistemology is about. But also: the concept of evidence for theories is a mistake that doesn’t actually make sense, as Popper explained. If you doubt this, do what no one else on this site has yet managed: tell me what “support” means (like in the phrase “supporting evidence”) and tell me how support differs from consistency.
The biggest thing Popper has to offer is the solution to justificationism, which has plagued almost everyone's thinking since Aristotle. You won't know quite what that is because it's an unconscious bias for most people. In short it is the idea that theories should be supported/justified/verified/proven, or whatever, whether probabilistically or not. A fraction of this is: he solved the problem of induction. Genuinely solved it, rather than simply giving up and accepting regress/foundations/circularity/whatever.
I’ve read his arguments for this, I simply wasn’t convinced that accepting it in any way improved scientific conduct.
"Support" would be data in light of which the subjective likelihood of a hypothesis is increased. If consistency does not meaningfully differ from this with respect to how we respond to data, can you explain why it is more practical to think about data in terms of consistency than support?
I’d also like to add that I do know what justificationism is, and your tendency to openly assume deficiencies in the knowledge of others is rather irritating. I normally wouldn’t bother to remark upon it, but given that you posed a superior grasp of socially effective debate conduct as evidence of the strength of your epistemology, I feel the need to point out that I don’t feel like you’re meeting the standards of etiquette I would expect of most members of Less Wrong.
Yet again you disagree with no substantive argument. If you don’t have anything to say, why are you posting?
Well, consistency is good as far as it goes. If we see 10 white swans, we should reject “all swans are black” (yes, even this much depends on some other stuff). Consistency does the job without anything extraneous or misleading.
The support idea claims that sometimes evidence supports one idea it is consistent with more than another. This isn’t true, except in special cases which aren’t important.
The way Popper improves on this is by noting that there are always many hypotheses consistent with the data. Saying their likelihood increases is pointless. It does not help deal with the problem of differentiating between them. Something else, not support, is needed. This leaves the concept of support with nothing useful to do, except be badly abused in sloppy arguments. (I have in mind arguments I've seen elsewhere. Lots of them. What people do is they find some evidence, and some theory it is consistent with, and they say the theory is supported so now they have a strong argument or whatever. And they are totally selective about this. You try to tell them, "Well, this other theory is also consistent with the data, so it's supported just as much, right?" and they say no, theirs fits the data better, so it's supported more. But you ask what the difference is, and they can't tell you, because there is no answer. The idea that a theory can fit the data better than another, when both are consistent with the data, is a mistake. Again, there are some special cases that don't matter in practice.)
Suppose I ask a woman if she has children. She says no.
This is supporting evidence for the hypothesis that she does not have children; it raises the likelihood from my perspective that she is childless.
It is entirely consistent with the hypothesis that she has children; she would simply have to be lying.
So it appears to me that in this case, whatever arguments you might make regarding induction, viewing the data in terms of consistency does not inform my behavior as well.
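The Bayesian reading of this example can be made explicit with Bayes' theorem. The prior and the answer rates below are invented numbers for illustration only; the point is that both hypotheses remain consistent with the answer (both likelihoods are nonzero), yet the update treats them very differently.

```python
# Bayes' theorem applied to the example: a woman answers "no" when asked
# whether she has children. All numbers are invented for illustration.
prior_childless = 0.5          # P(childless) before asking
p_no_given_childless = 0.99    # childless women almost always say "no"
p_no_given_children = 0.05     # mothers rarely lie and say "no"

# Total probability of hearing "no":
# P(no) = P(no|childless)P(childless) + P(no|children)P(children)
p_no = (p_no_given_childless * prior_childless
        + p_no_given_children * (1 - prior_childless))

# Posterior by Bayes' theorem: P(childless | no)
posterior = p_no_given_childless * prior_childless / p_no
print(round(posterior, 3))  # → 0.952
```

Both hypotheses have P(no | H) > 0, so in the boolean sense both are "consistent" with the data; the disagreement in the thread is over whether the ratio of those likelihoods licenses anything further.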
This is the standard story. It is nothing but an appeal to intuition (and/or unstated background knowledge, unstated explanations, unstated assumptions, etc). There is no argument for it and there never has been one.
Refuting this common mistake is something important Popper did.
Try reading your post again. You simply assumed that her not having children is more likely. That is not true from the example presented, without some unstated assumptions being added. There is no argument in your post. That makes it very difficult to argue against because there's nothing to engage with.
It could go either way. You know it could go either way. You claim one way fits the data better, but you don’t offer any rigorous guidelines (or anything else) for figuring out which way fits better. What are the rules to decide which consistent theories are more supported than others?
Of course it could go either way. But if I behaved in everyday life as if it were equally likely to go either way, I would be subjecting myself to disaster. For practical purposes it has always served me better to accept that certain hypotheses that are consistent with the available data are more probable than others, and while I cannot prove that this makes it more likely that it will continue to do so in the future, I’m willing to bet quite heavily that it will.
If Popper’s epistemology does not lead to superior results to induction, and at best, only reduces to procedures that perform as well, then I do not see why I should regard his refutation of induction as important.
Support is the same thing as the evidence being more consistent with that hypothesis than with the alternatives (P(E|H) > P(E|~H)).
What is “more consistent”?
Consistent = does not contradict. But you can’t not-contradict more. It’s a boolean issue.
Then you have your answer: Support is non-boolean. I don’t think a boolean concept of consistency of observations with anything makes sense, though (consistent would mean P(E|H)>0, but observations never have a probability of 0 anyway, so every observation would be consistent with everything, or you’d need an arbitrary cut-off. P(observe black sheep|all sheep are white) > 0, but is very small ).
Some theories predict that some things won’t happen (0 probability). I consider this kind of theory important.
You say I have my answer, but you have not answered. I don’t think you’ve understood the problem. To try to repeat myself less, check out the discussion here, currently at the bottom:
http://lesswrong.com/lw/54u/bayesian_epistemology_vs_popper/3urr?context=3
But they don’t predict that you won’t hallucinate, or misread the experimental data, or whatever. Some things not happening doesn’t mean they won’t be observed.
You asked how support differed from consistency. Boolean vs. real number is a difference. Even if you arbitrarily decide that only booleans are allowed and real numbers are not, that doesn’t make it inconsistent for those who use real numbers to distinguish their use of real numbers from your use of booleans.
No. It provides an example of a way in which you are better than me. I am overwhelmingly confident that I can find ways in which I am better than you.
Could you explain how a Popperian disputes such an assertion? Through only my own fault, I can’t listen to an mp3 right now.
My understanding is that anyone would make that argument in the same way: by providing evidence in the Bayesian sense, which would convince a Bayesian. What I am really asking for is a description of why your beliefs aren’t the same as mine but better. Why is it that a Popperian disagrees with a Bayesian in this case? What argument do they accept that a Bayesian wouldn’t? What is the corresponding calculation a Popperian does when he has to decide how to gamble with the lives of six billion people on an uncertain assertion?
I agree that different ways of thinking can be better or worse even when they come to the same conclusions. You seem to be arguing that Bayesianism is wrong, which is a very different thing. At best, you seem to be claiming that trying to come up with probabilities is a bad idea. I don’t yet understand exactly what you mean. Would you never take a bet? Would you never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?
This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesians’ willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.
I do not plan to continue this discussion except in the pursuit of an example about which we could actually argue productively.
e.g. by pointing out that whether we do or don’t survive depends on human choices, which in turn depend on human knowledge. And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge. So this is not prediction but prophecy. And prophecy has a built-in bias towards pessimism: because we can’t make predictions about future knowledge, prophets in general make predictions that disregard future knowledge. These are explanatory, philosophical arguments which do not rely on evidence (that is appropriate because it is not a scientific or empirical mistake being criticized). No corresponding calculation is made at all.
You ask about how Popperians make decisions if not with such calculations. Well, say we want to decide if we should build a lot more nuclear power plants. This could be taken as gambling with a lot of lives, and maybe even all of them. Of course, not doing it could also be taken as a way of gambling with lives. There’s no way to never face any potential dangers. So, how do Popperians decide? They conjecture an answer, e.g. “yes”. Actually, they make many conjectures, e.g. also “no”. Then they criticize the conjectures, and make more conjectures. So for example I would criticize “yes” for not providing enough explanatory detail about why it’s a good idea. Thus “yes” would be rejected, but a variant of it like “yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites” would be better. If I didn’t understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the “no” answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?
I think it’s wrong as an epistemology. For example because induction is wrong, and the notion of positive support is wrong. Of course Bayes’ theorem is correct, and various math you guys have done is correct. I keep getting conflicting statements from people about whether Bayesianism conflicts with Popperism or not, and I don’t want to speak for you guys, nor do I want to discourage anyone from finding the shared ideas or discourage them from learning from both.
Bets are made on events, like which team wins a sports game. Probabilities are fine for events. Probabilities of the truth of theories are problematic (b/c e.g. there is no way to make them non-arbitrary). And it’s not something a fallibilist can bet on because he accepts we never know the final truth for sure, so how are we to set up a decision procedure that decides who won the bet?
We are not afraid of uncertainty. Popperian epistemology is fallibilist. It rejects certainty. Life is always uncertain. That does not imply probability is the right way to approach all types of uncertainty.
Bayesian reasoning diverges when it says that ideas can be positively supported. We diverge because Popper questioned the concept of positive support, as I posted in the original text on this page, and which no one has answered yet. The criticism of positive support begins by considering what it is (you tell me) and how it differs from consistency (you tell me).
Almost, but you seem to have left out the rather important detail of how to actually make the decision. Based on the process of criticizing conjectures you’ve described so far, it seems that there are two basic routes you can take to finish the decision process once the critical smoke has cleared.
First, you can declare that, since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other. In this way you don’t actually make a decision and the problem remains unsolved.
Second, you can choose to go with the conjecture that best weathered the criticisms you were able to muster. That’s fine, but then it’s not clear that you’ve done anything different from what a Bayesian would have done—you’ve simply avoided explicitly talking about things like probabilities and priors.
Which of these is a more accurate characterization of the Popperian decision process? Or is it something radically different from these two altogether?
When you have exactly one non-refuted theory, you go with that.
The other cases are more complicated and difficult to understand.
Suppose I gave you the answer to the other cases, and we talked about it enough for you to understand it. What would you change your mind about? What would you concede?
If i convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?
If you have lots of other objections you are interested in, I would suggest you just accept for now that we have a method and focus on the other issues first.
But some are criticized and some aren’t.
But how is that to be judged?
No, we always go with uncriticized ideas (which may be close variants of ideas that were criticized). Even the terminology is very tricky here—the English language is not well adapted to expressing these ideas. (In particular, the concept “uncriticized” is a very substantive one with a lot of meaning, and the word for it may be misleading, but other words are even worse. And the straightforward meaning is OK for present purposes, but may be problematic in future discussion.)
Yes, different. Both of these are justificationist ways of thinking. They consider how much justification each theory has. The first one rejects a standard source of justification, does not replace it, and ends up stuck. The second one replaces it, and ends up, as you say, reasonably similar to Bayesianism. It still uses the same basic method of tallying up how much of some good thing (which we call justification) each theory has, and then judging by what has the most.
Popperian epistemology does not justify. It uses criticism for a different purpose: a criticism is an explanation of a mistake. By finding mistakes, and explaining what the mistakes are, and conjecturing better ideas which we think won’t have those mistakes, we learn and improve our knowledge.
Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology. That is, given what I understand about the set of ideas, it is not clear to me how we would go about making practical scientific decisions. With that said, I can’t reasonably guarantee that I will not have later objections as well before we’ve even had the discussion!
So let me see if I’m understanding this correctly. What we are looking for is the one conjecture which appears to be completely impervious to any criticism that we can muster against it, given our current knowledge. Once we have found such a conjecture, we—I don’t want to say “assume that it’s true,” because that’s probably not correct—we behave as if it were true until it finally is criticized and, hopefully, replaced by a new conjecture. Is that basically right?
I’m not really seeing how this is fundamentally anti-justificationist. It seems to me that the Popperian epistemology still depends on a form of justification, but that it relies on a sort of boolean all-or-nothing justification rather than allowing graded degrees of justification. For example, when we say something like, “in order to make a decision, we need to have a guiding theory which is currently impervious to criticism” (my current understanding of Popper’s idea, roughly illustrated), isn’t this just another way of saying: “the fact that this theory is currently impervious to criticism is what justifies our reliance on it in making this decision?”
In short, isn’t imperviousness to criticism a type of justification in itself?
OK then :-) Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?
That is the general idea (but incomplete).
The reason we behave as if it’s true is that it’s the best option available. All the other theories are criticized (= we have an explanation of what we think is a mistake/flaw in them). We wouldn’t want to act on an idea that we (thought we) saw a mistake in, over one we don’t think we see any mistake with—we should use what (fallible) knowledge we have.
A justification is a reason a conjecture is good. Popperian epistemology basically has no such thing. There are no positive arguments, only negative. What we have instead of positive arguments is explanations. These are to help people understand an idea (what it says, what problem it is intended to solve, how it solves it, why they might like it, etc...), but they do not justify the theory, they play an advisory role (also note: they pretty much are the theory, they are the content that we care about in general).
One reason that not being criticized isn’t a justification is that calling it one leads to a regress problem. So let’s not say that! The other reason is: what would that be adding as compared with not saying it? It’s not helpful (and if you give specific details/claims of how it is helpful, which are in line with the justificationist tradition, then I can give you specific criticisms of those).
Terminology isn’t terribly important. David Deutsch used the word justification in his explanation of this in the dialog chapter of The Fabric of Reality (highly recommended). I don’t like to use it. But the important thing is not to mean anything that causes a regress problem, or to expect justification to come from authority, or various other mistakes. If you want to take the Popperian conception of a good theory and label it “justified” it doesn’t matter so much.
I agree that the nested comment format is a little cumbersome (in fact, this is a bit of a complaint of mine about the LW format in general), but it’s not clear that this discussion warrants an entirely new topic.
Okay. So what is really at issue here is whether or not the Popperian conception of a good theory, whatever we call that, leads to regress problems similar to those experienced by “justificationist” systems.
It seems to me that it does! You claim that the particular feature of justificationist systems that leads to a regress is their reliance on positive arguments. Popper’s system is said to avoid this issue because it denies positive arguments and instead only recognizes negative arguments, which circumvents the regress issue so long as we accept modus tollens. But I claim that Popper’s system does in fact rely on positive arguments at least implicitly, and that this opens the system to regress problems. Let me illustrate.
According to Popper, we ought to act on whatever theory we have that has not been falsified. But that itself represents a positive argument in favor of any non-falsified theory! We might ask: okay, but why ought we to act only on theories which have not been falsified? We could probably come up with a pretty reasonable answer to this question—but as you can see, the regress has begun.
I think it’s a big topic. Began answering your question here:
http://lesswrong.com/r/discussion/lw/551/popperian_decision_making/
No regress has begun. I already answered why:
Try to regress me.
It is possible, if you want, to create a regress of some kind which isn’t the same one and isn’t important. The crucial issue is: are the questions that continue the regress any good? Do they have some kind of valid point to them? If not, then I won’t regard it as a real regress problem of the same type. You’ll probably wonder how that’s evaluated, but, well, it’s not such a big deal. We’ll quickly get to the point where your attempts to create regress look silly to you. That’s different than the regresses inductivists face where it’s the person trying to defend induction who runs out of stuff to say.
You’re equivocating between “knowing exactly the contents of the new knowledge”, which may be impossible for the reason you describe, and “know some things about the effect of the new knowledge”, which we can do. As Eliezer said, I may not know which move Kasparov will make, but I know he will win.
That’s because to a Bayesian, these things are the same thing. Epistemology is all about probability—and vice versa. Bayes’s theorem includes induction and confirmation. You can’t accept Bayes’s theorem and reject induction without crazy inconsistency—and Bayes’s theorem is just the math of probability theory.
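The claim that confirmation falls straight out of Bayes’ theorem can be checked with a one-line inequality: P(H|E) > P(H) exactly when P(E|H) > P(E). A minimal sketch, with invented numbers chosen only for illustration:

```python
# Illustrative sketch: "confirmation" as a consequence of Bayes' theorem.
# E confirms H (raises its probability) exactly when P(E|H) > P(E),
# i.e. when H makes the evidence more expected than it otherwise would be.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) from the prior and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.3
p_e_given_h, p_e_given_not_h = 0.9, 0.2  # H predicts E strongly

post = update(prior, p_e_given_h, p_e_given_not_h)
print(post > prior)  # True: observing E confirms H
```

Nothing here settles the dispute over whether this counts as an epistemology; it only shows the arithmetic sense in which the comment says Bayes’ theorem "includes" confirmation.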
If I understand correctly, I think curi is saying that there’s no reason for probability and epistemology to be the same thing. That said, I don’t entirely understand his/her argument in this thread, as some of the criticisms he/she mentions are vague. For example, what are these “epistemological problems” that Popper solves but Bayes doesn’t?