Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper’s epistemology? Someone should have done it, right?
Most things in the space of possible documents can’t be refuted, because they don’t correspond to anything refutable. They are simply confused, and irredeemably.
In the case of epistemology, virtually everything that has ever been said falls into this category. I am glad that I don’t have to spend time thinking about it, because it is solved. I would not generally criticize a rival’s ideas, because I no longer care. The problem is solved, and I can go work on things that still matter.
Are you so sure of yourself—that you are right about many things—that you will dismiss all rival ideas without even having to know what they say?
Once I know the definitive answer to a question, I will dismiss all other answers (rather than trying to poke holes in them). The only sort of argument which warrants response is an objection to my current definitive answer. So ignorance of Popper is essentially irrelevant (and I suspect I couldn’t object to anything in his philosophy, because it has essentially no content concrete enough to be defeated by mere reasoning).
The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted—whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested, except that I am fairly confident the problem has no solution for what are essentially obvious reasons. Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.
Really this entire discussion comes down to what we want out of epistemology.
That [guiding betting] is not primarily what we want.
What do you want? I don’t understand at all. Whatever you specify, I would be shocked if critical rationality provided it. Here is what I want, and maybe you will agree:
I want to decide between action A and action B. To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world. In particular, by choosing B instead of A, I am making a bet about the consequences of A and B. I would like to make such bets in the best possible way.
Lo! This is precisely what Bayesianism allows me to do. Why is there more to say?
You can object that it involves knowing a prior. But from the problem statement it is obvious (as a mathematical fact) that there is a universe in which each possible prior is the best one. Is there a strategy that does better than Bayesianism with a reasonable prior in all possible universes? Maybe, but Popper’s ideas aren’t nearly precise enough to answer the question (by which I mean, not even at the point where this question, to me clearly the most important one, is meaningful). Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore “avoids” this difficulty?
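The prior-dependence point can be made concrete with a toy model (the setup and all numbers here are invented for illustration): two possible universes, a coin whose heads-probability is either 0.8 or 0.2, and a bettor whose prediction for the next flip depends on a prior over the two. A minimal sketch:

```python
def predictive_heads(prior_08, heads, tails):
    """P(next flip = heads) after observing `heads` heads and `tails` tails,
    given prior probability `prior_08` that the coin's bias is 0.8."""
    like_08 = 0.8 ** heads * 0.2 ** tails   # likelihood of the data under bias 0.8
    like_02 = 0.2 ** heads * 0.8 ** tails   # likelihood of the data under bias 0.2
    post_08 = prior_08 * like_08 / (prior_08 * like_08 + (1 - prior_08) * like_02)
    return post_08 * 0.8 + (1 - post_08) * 0.2

# Before any data, the bet is pure prior: each prior is the best one in the
# universe it happens to favor, and the worst one in the other.
print(predictive_heads(0.9, 0, 0))  # ≈ 0.74: a good bet only if the bias really is 0.8
print(predictive_heads(0.1, 0, 0))  # ≈ 0.26: a good bet only if the bias really is 0.2

# After 2 heads and 8 tails, both priors are washed out toward 0.2.
print(predictive_heads(0.9, 2, 8))
print(predictive_heads(0.1, 2, 8))
```

This is the sense in which, as a mathematical fact, each possible prior is the best one in some universe, while evidence eventually dominates any fixed prior.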
If I have to bet, or make a decision that affects people’s lives which amounts to a bet, I am going to use Bayesianism, or a computational heuristic which I justify by Bayesianism. Doing something else seems irresponsible.
Most things in the space of possible documents can’t be refuted, because they don’t correspond to anything refutable. They are simply confused, and irredeemably.
You don’t think confused things can be criticized? You can, for example, point out ambiguous passages. That would be a criticism. If they have no clarification to offer, then it would be (tentatively and fallibly) decisive (pending some reason to reconsider).
But you haven’t provided any argument that Popper in particular was confused, irrefutable, or whatever. I don’t know about you, but as someone who wants to improve my epistemological knowledge I think it’s important to consider all the major ideas in the field at the very least enough to know one good criticism of each.
Refusing to address criticism because you think you already have the solution is very closed minded, is it not? You think you’re done with thinking, you have the final truth, and that’s that..?
The only sort of argument which warrants response is an objection to my current definitive answer.
Popper published several of those. Where’s the response from Bayesians?
One thing to note: it’s hard to understand his objections without understanding his philosophy a bit more broadly (otherwise you will misread things, not knowing the broader context of what he is trying to say, which assumptions he does not share with you, and so on).
The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted—whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested
Popper solved that problem.
I am fairly confident the problem has no solution for what are essentially obvious reasons
The standard reasons seem obvious because of your cultural bias. Since Aristotle, some philosophical assumptions have been taken for granted by almost everyone. Now most people regard them as obvious. Given those assumptions, I agree that your conclusion follows (no way to avoid arbitrariness). The assumptions are called “justificationism” by Popperians, and are criticized in detail. I think you ought to be interested in this.
One criticism of justificationism is that it causes the regress/arbitrariness/foundations problem. The problem doesn’t exist automatically but is being created by your own assumptions.
Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.
What are you talking about? You haven’t read his books and claim he didn’t give enough detail? He was something of a workaholic who didn’t watch TV, didn’t have a big social life, and worked and wrote all the time.
What do you want?
To create knowledge, including explanatory and non-instrumentalist knowledge. You come off like a borderline positivist to me, who has trouble with the notion that non-empirical stuff is even meaningful. (No offense intended, and I’m not assuming you actually are a positivist, but I’m not really seeing much difference yet.)
To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world.
To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don’t think Bayesianism addresses this well.
Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore “avoids” this difficulty?

Neither. You can and should do better!
To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don’t think Bayesianism addresses this well.
Given well-defined contexts and meanings for good and bad, I don’t see why Bayesianism could not be effectively applied to moral problems.
You can’t create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can’t evaluate).
You’ve repeatedly claimed that the Popperian approach can somehow address moral issues. Despite requests you’ve shown no details of that claim other than to say that you do the same thing you would otherwise do, but with moral claims. So let’s work through a specific moral issue. Can you take an example of a real moral issue that has been controversial historically (say, slavery or free speech) and show how a Popperian would approach it? A concrete worked-out example would be very helpful.
http://lesswrong.com/lw/552/reply_to_benelliott_about_popper_issues/3uv7

And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.
And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.
Consider that you are replying to a statement in which I just said that all you’ve done is say that it would use the same methodologies. Given that, does this reply seem sufficient? Do I need to repeat my request for a worked example (which is not included in your link)?
Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don’t.
First of all, you shouldn’t lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Bayes’ theorem is an abstraction. If you don’t have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn’t use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction.
This doesn’t mean that Bayes’ theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
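To make the “transform your problem into the abstraction” point concrete, here is a minimal sketch (the medical numbers are invented purely for illustration): a vague question only fits Bayes’ theorem once it has been restated as a prior and two likelihoods.

```python
def bayes(p_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) = P(E | H) P(H) / P(E), with P(E) expanded by total probability."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# The "transformation" step: "does the patient have the disease?" fits the
# abstraction only once restated as three numbers -- a 1% base rate, 95%
# sensitivity, and a 5% false-positive rate (all invented here).
print(bayes(0.01, 0.95, 0.05))  # ≈ 0.16: a positive test is still probably a false alarm
```

When a problem resists being cast into this three-number form, that is exactly the situation where, as said above, one shouldn’t force the abstraction.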
You can’t create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can’t evaluate).
These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.
First of all, you shouldn’t lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Sorry. I have no idea who is who. Don’t mind me.
This doesn’t mean that Bayes’ theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
The Popperian method is universal.
if Bayesianism is in some sense Turing complete then it can be used to do all of this
Well, umm, yes, but that’s no help. My iMac is definitely Turing complete. It could run an AI. It could do whatever. But we don’t know how to make it do that stuff. Epistemology should help us.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods.
No problem, I’m just pointing out that there are other perspectives out here.
The Popperian method is universal.
Sure, in the sense that it is Turing complete; but that doesn’t make it the most efficient approach for all cases. For example, I’m not going to use it to evaluate “2 + 3”; it is much more efficient for me to use the arithmetic abstraction.
But we don’t know how to make it do that stuff. Epistemology should help us.
Agreed, it is one of the reasons that I am actively working on epistemology.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods.
Example or details?
The naive Bayes classifier can be an effective way to classify discrete input into independent classes. Certainly for some cases it could be used to classify something as “good” or “bad” based on example input.
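A minimal sketch of that suggestion, with invented training sentences and labels (this is not a claim that morality reduces to word statistics, just an illustration of the classifier being described):

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (tokens, label) pairs. Returns the counts a
    naive Bayes classifier needs."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in examples:
        label_counts[label] += 1
        for t in tokens:
            word_counts[label][t] += 1
            vocab.add(t)
    return label_counts, word_counts, vocab

def classify(tokens, label_counts, word_counts, vocab):
    """Pick the label maximizing log P(label) + sum of log P(token | label),
    with add-one smoothing for unseen words."""
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label, n in label_counts.items():
        score = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            score += math.log((word_counts[label][t] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented toy training data: short descriptions labeled by moral verdict.
examples = [
    ("helping the injured stranger".split(), "good"),
    ("sharing food with the hungry".split(), "good"),
    ("stealing from the poor".split(), "bad"),
    ("lying to the injured for profit".split(), "bad"),
]
model = train(examples)
print(classify("sharing with the stranger".split(), *model))  # prints: good
```

The “independent classes” caveat above matters: naive Bayes assumes the tokens are conditionally independent given the label, which is exactly what fails for the interdependent moral ideas discussed next.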
Bayesian networks can capture the meaning within interdependent sets. For example the meaning of words forms a complex network; if the meaning of a single word shifts it will probably result in changes to the meanings of related words; and in a similar way ideas on morality form connected interdependent structures.
Within a culture a particular moral position may be dependent on other moral positions, or even other aspects of the culture. For example a combination of religious beliefs and inheritance traditions might result in a belief that a husband is justified in killing an unfaithful wife. A Bayesian network trained on information across cultures might be able to identify these kinds of relationships. With this you could start to answer questions like “Why is X moral in the UK but not in Saudi Arabia?”
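A toy version of that idea (every variable, structure, and number below is invented): a two-node chain Culture → Belief → Position, where answering “why does the verdict differ by culture?” means summing out the intermediate belief variable.

```python
# Made-up conditional probability tables for a toy network.
p_belief_given_culture = {
    "A": {"strict": 0.8, "lenient": 0.2},
    "B": {"strict": 0.1, "lenient": 0.9},
}
p_condoned_given_belief = {"strict": 0.6, "lenient": 0.05}

def p_condoned(culture):
    """P(the act is condoned | culture), marginalizing over the belief variable."""
    return sum(p * p_condoned_given_belief[b]
               for b, p in p_belief_given_culture[culture].items())

# The same act gets a very different moral status in the two cultures, and
# the network localizes *why*: the difference flows through the belief node.
print(p_condoned("A"))  # ≈ 0.49
print(p_condoned("B"))  # ≈ 0.105
```

A real cross-cultural model would need many more nodes and learned (not hand-set) tables, but the query pattern is the same: the network makes the dependency structure explicit and inspectable.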
No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to—which I think is all of them, but that doesn’t matter to universality).
Not in the sense that it’s Turing complete so you could, by a roundabout way and using whatever methods, do anything.
I think the basic way we differ is you have despaired of philosophy getting anywhere, and you’re trying to get rigor from math. But Popper saved philosophy. (And most people didn’t notice.) Example:
With this you could start to answer questions like “Why is X moral in the UK but not in Saudi Arabia?”
You have very limited ambitions. You’re trying to focus on small questions b/c you think bigger ones, like “what is moral, objectively?”, are too hard and, since your math won’t answer them, it’s hopeless.
No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to—which i think is all of them, but that doesn’t matter to universality).
Perhaps I don’t understand some nuance of what you mean here. If you can explain it or link to something that explains this in detail I will read it.
But to respond to what I think you mean… If you have a method that can be applied to all types of knowledge, that implies that it is Turing complete; it is therefore equivalent in capability to other Turing complete systems; that also means it is susceptible to the infinite regresses you dislike in “justificationist epistemologies”… i.e. the halting problem.
Also, just because it can be applied to all types of knowledge does not mean it is the best choice for all types of knowledge, or for all types of operations on that knowledge.
I think the basic way we differ is you have despaired of philosophy getting anywhere, and you’re trying to get rigor from math. But Popper saved philosophy. (And most people didn’t notice.) Example:
I would not describe my perspective that way; you may have forgotten that I am a third party in this argument. I think that there is a lot of historical junk in philosophy and that it is continuing to produce a lot of junk—Popper didn’t fix this and neither will Bayesianism, it is more of a people problem—but philosophy has also produced and is producing a lot of interesting and good ideas.
I think one way we differ is that you see a distinct difference between math and philosophy and I see a wide gradient of abstractions for manipulating information. Another is that you think that there is something special about Popper’s approach that allows it to rise above all other approaches in all cases, and I think that there are many approaches and that it is best to choose the approach based on the context.
With this you could start to answer questions like “Why is X moral in the UK but not in Saudi Arabia?”
You have very limited ambitions. You’re trying to focus on small questions b/c you think bigger ones, like “what is moral, objectively?”, are too hard and, since your math won’t answer them, it’s hopeless.
This was a response to your request for an example; you read too much into it if you assume it implies anything about my ambitions.
A question like “what is moral objectively?” is easy. Nothing is “moral objectively”. Meaning is created within contexts of assessment; if you want to know if something is “moral” you must consider that question with a context that will perform the classification. Not all contexts will produce the same result and not all contexts will even support a meaning for the concept of “moral”.
But to respond to what I think you mean… If you have a method that can be applied to all types of knowledge, that implies that it is Turing complete; it is therefore equivalent in capability to other Turing complete systems;
Minor nitpick: at least capable of modeling any Turing machine, not Turing complete. For example, something that had access to some form of halting oracle would be able to do more than a Turing machine.
Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don’t.
First of all, you shouldn’t lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Bayes’ theorem is an abstraction. If you don’t have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn’t use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction.
This doesn’t mean that Bayes’ theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
You can’t create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can’t evaluate).
These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.
Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore “avoids” this difficulty?
Saying your epistemology has a “necessary flaw” is an admission of defeat, that it doesn’t work. The “necessary flaw” is unavoidable if you are committed to the justificationist way of thinking. Popper saw that the whole idea of justification is wrong and he offered a different idea to replace it—an idea with no known flaws. You criticize Popper for being underspecified, yet he elaborated on his ideas in many books. And, furthermore, no amount of mathematical precision or formalism will paper over cracks in justificationist epistemologies.
Saying your epistemology has a “necessary flaw” is an admission of defeat,
In this case, it’s a recognition of reality. I repeat that I would like to defer this conversation until we have something concrete to disagree about. Until then I don’t care about that difference.
The “necessary flaw” arises because all justificationist epistemologies lead to infinite regress or circular arguments or appeals to authority (or even sillier things). That you think there is no alternative to justificationism and I don’t is something concrete we disagree about.
It’s interesting how different Bayesians say different things. They don’t seem to all agree with each other even about their basic claims. Sometimes Bayesianism is proved, other times it is acknowledged to have known flaws. Sometimes it may be completely compatible with Popper, other times it is dethroning Popper. It seems to me that perhaps Bayesianism is a bit underspecified. I wonder why they haven’t sorted out these internal disputes.
Sometimes Bayesianism is proved, other times it is acknowledged to have known flaws. Sometimes it may be completely compatible with Popper, other times it is dethroning Popper. It seems to me that perhaps Bayesianism is a bit underspecified. I wonder why they haven’t sorted out these internal disputes.
There are disputes among the Bayesians. But you are confusing different issues. First, the presence of internal disputes about the borders of an idea is not a priori a problem with an idea that is in progress. The fact that evolutionary biologists disagree about how much neutral drift matters isn’t a reason to reject evolution. (It is possible that I’m reading an unintended implication here.)
Moreover, most of what you are talking about here are not contradictions but failures to understand. That Bayesianism has flaws is a distinct claim from the one being made when someone invokes something like Cox’s theorem, which is the sort of result you are referring to as “Sometimes Bayesianism is proved” (which, incidentally, is a terribly unhelpful and vague way of discussing the point). The point of results like Cox’s theorem is that any attempt to formalize epistemology under certain very broad, very weak assumptions must end up with some form of Bayesianism.

At the same time, it is important to keep in mind that this isn’t saying all that much. It doesn’t, for example, say anything about what one’s priors should be. Thus one has the classical disagreement between objective and subjective Bayesians over what sort of priors to use (and within each of those there is further breakdown; LessWrong seems to mainly have objective Bayesians favoring some form of Occam prior, although just which form is not clear).

Similarly, whether Bayesianism is compatible with Popper depends a lot on what one means by “Bayesianism”, “compatible” and “Popper”. Bayesianism is certainly not compatible with a naive-Popperian approach, which is what many people are talking about when they say it is not compatible (and, as you’ve already noted, Popper himself wasn’t a naive Popperian). But some people use “Popper” to mean the idea that, given an interesting hypothesis, one should seek out experiments likely to falsify it if it is false (an idea that actually predates Popper), though what one means by “falsify” can be a problem.
Most things in the space of possible documents can’t be refuted, because they don’t correspond to anything refutable. They are simply confused, and irredeemably. In the case of epistemology, virtually everything that has ever been said falls into this category. I am glad that I don’t have to spend time thinking about it, because it is solved. I would not generally criticize a rival’s ideas, because I no longer care. The problem is solved, and I can go work on things that still matter.
Once I know the definitive answer to a question, I will dismiss all other answers (rather than trying to poke holes in them). The only sort of argument which warrants response is an objection to my current definitive answer. So ignorance of Popper is essentially irrelevant (and I suspect I couldn’t object to anything in his philosophy, because it has essentially no content concrete enough to be defeated by mere reasoning).
The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted—whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested, except that I am fairly confident the problem has no solution for what are essentially obvious reasons. Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.
Really this entire discussion comes down to what we want out of epistemology.
What do you want? I don’t understand at all. Whatever you specify, I would be shocked if critical rationality provided it. Here is what I want, and maybe you will agree:
I want to decide between action A and action B. To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world. In particular, by choosing B instead of A, I am making a bet about the consequences of A and B. I would like to make such bets in the best possible way.
Lo! This is precisely what Bayesianism allows me to do. Why is there more to say?
You can object that it involves knowing a prior. But from the problem statement it is obvious (as a mathematical fact) that there is a universe in which each possible prior is the best one. Is there a strategy that does better than Bayesianism with a reasonable prior in all possible universes? Maybe, but Popper’s ideas aren’t nearly precise enough to answer the question (by which I mean, not even at the point where this question, to me clearly the most important one, is meaningful). Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore “avoids” this difficulty?
If I have to bet, or make a decision that effects peoples lives which amounts to a bet, I am going to use Bayesianism, or a computational heuristic which I justify by Bayesianism. Doing something else seems irresponsible.
You don’t think confused things can be criticized? You can, for example, point out ambiguous passages. That would be a criticism. If they have no clarification to offer, then it would be (tentatively and fallibly) decisive (pending some reason to reconsider).
But you haven’t provided any argument that Popper in particular was confused, irrefutable, or whatever. I don’t know about you, but as someone who wants to improve my epistemological knowledge I think it’s important to consider all the major ideas in the field at the very least enough to know one good criticism of each.
Refusing to address criticism because you think you already have the solution is very closed minded, is it not? You think you’re done with thinking, you have the final truth, and that’s that..?
Popper published several of those. Where’s the response from Bayesians?
One thing to note is it’s hard to understand his objections without understanding his philosophy a bit more broadly (or you will misread stuff, not knowing the broader context of what he is trying to say, what assumptions he does not share with you, etc...)
Popper solved that problem.
The standard reasons seem obvious because of your cultural bias. Since Aristotle some philosophical assumptions have been taken for granted by almost everyone. Now most people regard them as obvious. GIven those assumptions, I agree that your conclusion follows (no way to avoid arbitrariness). The assumptions are called “justificationism” by Popperians, and are criticized in detail. I think you ought to be interested in this.
One criticism of justificationism is that it causes the regress/arbitrariness/foundations problem. The problem doesn’t exist automatically but is being created by your own assumptions.
What are you talking about? You haven’t read his books and claim he didn’t give enough detail? He was something of a workaholic who didn’t watch TV, didn’t have a big social life, and worked and wrote all the time.
To create knowledge, including explanatory and non-instrumentalist knowledge. You come off like a borderline positivist to me, who has trouble with the notion that non-empirical stuff is even meaningful. (No offense intended, and I’m not assuming you actually are a positivist, but I’m not really seeing much difference yet.)
To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don’t think Bayesianism addresses this well.
Neither. You can and should do better!
Given well defined contexts and meanings for good and bad I don’t see why Bayesianism could not be effectively applied to to moral problems.
Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don’t.
You can’t create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can’t evaluate).
You’ve repeatedly claimed that the Popperian approach can somehow address moral issues. Despite requests you’ve shown no details of that claim other than to say that you do the same thing you would do but with moral claims. So let’s work through a specific moral issue. Can you take an example of a real moral issue that has been controversial historically (like say slavery or free speech) and show how the Popperian would approach? An concrete worked out example would be very helpful.
http://lesswrong.com/lw/552/reply_to_benelliott_about_popper_issues/3uv7
And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.
Consider that you are replying to a statement I just said that all you’ve done is say that it would use the same methodologies. Given that, does this reply seem sufficient? Do I need to repeat my request for a worked example (which is not included in your link)?
First of all, you shouldn’t lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Bayes’ theorem is an abstraction. If you don’t have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn’t use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction.
This doesn’t mean that Bayes’ theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.
Sorry. I have no idea who is who. Don’t mind me.
The Popperian method is universal.
Well, umm, yes but that’s no help. my iMac is definitely Turing complete. It could run an AI. It could do whatever. But we don’t know how to make it do that stuff. Epistemology should help us.
Example or details?
No problem, I’m just pointing out that there are other perspectives out here.
Sure, in the sense it is Turing complete; but that doesn’t make it the most efficient approach for all cases. For example I’m not going to use it to decide the answer to the statement “2 + 3”, it is much more efficient for me to use the arithmetic abstraction.
Agreed, it is one of the reasons that I am actively working on epistemology.
The naive Bayes classifier can be an effective way to classify discrete input into independent classes. Certainly for some cases it could be used to classify something as “good” or “bad” based on example input.
Bayesian networks can capture the meaning within interdependent sets. For example the meaning of words forms a complex network; if the meaning of a single word shifts it will probably result in changes to the meanings of related words; and in a similar way ideas on morality form connected interdependent structures.
Within a culture a particular moral position may be dependent on other moral positions, or even other aspects of the culture. For example a combination of religious beliefs and inheritance traditions might result in a belief that a husband is justified in killing an unfaithful wife. A Bayesian network trained on information across cultures might be able to identify these kinds of relationships. With this you could start to answer questions like “Why is X moral in the UK but not in Saudi Arabia?”
No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to—which i think is all of them, but that doesn’t matter to universality).
Not in the sense that it’s Turing complete so you could, by a roundabout way and using whatever methods, do anything.
I think the basic way we differ is you have despaired of philosophy getting anywhere, and you’re trying to get rigor from math. But Popper saved philosophy. (And most people didn’t notice.) Example:
You have very limited ambitious. You’re trying to focus on small questions b/c you think bigger ones like: what is moral objectively? are too hard and, since you math won’t answer them, it’s hopeless.
Perhaps I don’t understand some nuance of what you mean here. If you can explain it or link to something that explains this in detail I will read it.
But to respond to what I think you mean… If you have a method that can be applied to all types of knowledge, that implies that it is Turing complete; it is therefore equivalent in capability to other Turing complete systems; that also means it is susceptible to the infinite regresses you dislike in “justificationist epistemologies”… i.e. the halting problem.
Also, just because it can be applied to all types of knowledge does not mean it is the best choice for all types of knowledge, or for all types of operations on that knowledge.
I would not describe my perspective that way; you may have forgotten that I am a third party in this argument. I think that there is a lot of historical junk in philosophy and that it is continuing to produce a lot of junk—Popper didn’t fix this and neither will Bayesianism, it is more of a people problem—but philosophy has also produced and is producing a lot of interesting and good ideas.
I think one way we differ is that you see a distinct difference between math and philosophy and I see a wide gradient of abstractions for manipulating information. Another is that you think that there is something special about Popper’s approach that allows it to rise above all other approaches in all cases, and I think that there are many approaches and that it is best to choose the approach based on the context.
This was a response to your request for an example; you read too much into it to assume it implies anything about my ambitions.
A question like “what is moral objectively?” is easy. Nothing is “moral objectively”. Meaning is created within contexts of assessment; if you want to know if something is “moral” you must consider that question with a context that will perform the classification. Not all contexts will produce the same result and not all contexts will even support a meaning for the concept of “moral”.
Minor nitpick: say “at least capable of modeling any Turing machine” rather than “Turing complete”. For example, something that had access to some form of halting oracle would be able to do more than a Turing machine.
First of all, you shouldn’t lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Bayes’ theorem is an abstraction. If you don’t have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn’t use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction.
This doesn’t mean that Bayes’ theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
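For problems that do fit the abstraction, the update itself is just arithmetic. Here is a worked instance of Bayes’ theorem with invented numbers (a 1% prior and asymmetric likelihoods), showing how posterior = likelihood × prior / evidence:

```python
# Bayes' theorem on invented numbers: update a 1% prior on hypothesis H
# after observing evidence E.
p_h = 0.01            # prior P(H)
p_e_given_h = 0.9     # likelihood P(E | H)
p_e_given_not_h = 0.1 # likelihood P(E | not H)

# Total probability of the evidence, then the posterior P(H | E).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e
print(round(posterior, 3))  # 0.009 / 0.108 ≈ 0.083
```

Even strong evidence leaves the posterior small here because the prior was small, which is exactly why the choice of prior matters so much in the surrounding discussion.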
These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.
Saying your epistemology has a “necessary flaw” is an admission of defeat, that it doesn’t work. The “necessary flaw” is unavoidable if you are committed to the justificationist way of thinking. Popper saw that the whole idea of justification is wrong and he offered a different idea to replace it—an idea with no known flaws. You criticize Popper for being underspecified, yet he elaborated on his ideas in many books. And, furthermore, no amount of mathematical precision or formalism will paper over cracks in justificationist epistemologies.
In this case, it’s recognition of reality. I repeat that I would like to defer this conversation until we have something concrete to disagree about. Until then I don’t care about that difference.
The “necessary flaw” arises because all justificationist epistemologies lead to infinite regress or circular arguments or appeals to authority (or even sillier things). That you think there is no alternative to justificationism and I don’t is something concrete we disagree about.
Adding a reference for this comment: Münchhausen Trilemma.
It’s interesting how different Bayesians say different things. They don’t seem to all agree with each other even about their basic claims. Sometimes Bayesianism is proved, other times it is acknowledged to have known flaws. Sometimes it may be completely compatible with Popper, other times it is dethroning Popper. It seems to me that perhaps Bayesianism is a bit underspecified. I wonder why they haven’t sorted out these internal disputes.
There are disputes among the Bayesians. But you are confusing different issues. First, the presence of internal disputes about the borders of an idea is not a priori a problem with an idea that is in progress. The fact that evolutionary biologists disagree about how much neutral drift matters isn’t a reason to reject evolution. (It is possible that I’m reading an unintended implication here.)
Moreover, most of what you are talking about here are not contradictions but failures to understand. The claim that Bayesianism has flaws is distinct from results like Cox’s theorem, which is the sort of result Bayesians mean when you say “Sometimes Bayesianism is proved” (which, incidentally, is a terribly unhelpful and vague way of discussing the point). The point of results like Cox’s theorem is that if one attempts, under certain very weak assumptions, to formalize epistemology in a very broad way, one must end up with some form of Bayesianism. At the same time, it is important to keep in mind that this isn’t saying all that much. It doesn’t, for example, say anything about what one’s priors should be. Thus one has the classical disagreement between objective and subjective Bayesians over what sort of priors to use (and within each of those there is further breakdown; LessWrong seems to have mainly objective Bayesians favoring some form of Occam prior, although just which form is not clear). Similarly, whether or not Bayesianism is compatible with Popper depends a lot on what one means by “Bayesianism”, “compatible”, and “Popper”. Bayesianism is certainly not compatible with a naive-Popperian approach, which is what many people are talking about when they say it is not compatible (and, as you’ve already noted, Popper himself wasn’t a naive Popperian). But some people use “Popper” to mean the idea that, given an interesting hypothesis, one should search for experiments likely to falsify it if it is false (an idea that actually predates Popper), and what one means by “falsify” can itself be a problem.