You’re repeating the same things again. Which means you probably didn’t understand what I said about probability estimates always being wrong. Meanwhile probability distributions can be exactly right, in the sense that they perfectly fit your current knowledge. You should go read a few books or take a class on probability. As a second book I would recommend E.T. Jaynes’ Probability Theory.
As for probability being a part of reality, remember what I said about uncertainty being probabilistic if you accept these axioms: something cannot be more and less likely than something else at the same time; estimates of likeliness should not have large jumps on infinitesimal evidence; and estimates of likeliness should not ignore information or make it up (okay, fine, I just copied and pasted that from before)? Which of those axioms does Karl Popper reject?
You didn’t understand my point or address it. At all. You just gave up and stopped trying to engage with me. I was still trying. Communication isn’t trivially easy.
Your list of axioms doesn’t have anything to do with the regress argument I’ve been making, and it isn’t even close to sufficient to support your worldview (the axioms don’t even say that we ever can or should make a probability estimate about anything).
Because your point is framed in terms of the truth of specific probabilities, which are always wrong anyway, your point is ill-formed. T1=0, T2=1, the end. To do better you need to understand probability distributions.
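(Not part of the original exchange, but a minimal sketch of the distinction being drawn here: a single point estimate of a probability is always off, while a distribution over that probability can exactly encode what you know. The coin-flip setting and the `posterior_mean` helper are my own illustrative assumptions, not something either side proposed.)

```python
def posterior_mean(heads, flips, steps=10_000):
    """Mean of the distribution over a coin's bias after seeing
    `heads` heads in `flips` flips, starting from a uniform prior.
    The distribution is Beta(heads+1, flips-heads+1); we average it
    by simple numeric integration on a grid."""
    num = den = 0.0
    for i in range(1, steps):
        p = i / steps
        w = p**heads * (1 - p)**(flips - heads)  # unnormalized density
        num += p * w
        den += w
    return num / den

# Laplace's rule of succession: the average works out to
# (heads + 1) / (flips + 2), not the raw frequency heads / flips.
```

The single number you report is an average over a whole distribution, which is the thing the surrounding argument says you need in order to do better than "the estimate is wrong, the end."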
If your first probability estimate is wrong, without any error bar—but simply wrong in an unknown way—then you’re screwed, right?
Edit: And what are you talking about with T2=1? It does not have a probability of 1. That sounds like your “signs flip” thing which I addressed already. I still think you are imagining a different regress than the one I was talking about.
Think of it this way—if it’s wrong in an utterly unknown way, then the wrongness has perfect symmetry; there’s nothing to distinguish being wrong one way from being wrong in another. By the axiom that you shouldn’t make up information, when the information is symmetric, that part of the distribution (“part” as in you convolve the different parts together to get the total distribution) should be symmetric too. And since the final probability estimate is just the average over your distribution, the symmetry makes the problem easy—or if the problem is poorly defined or poorly understood, it at the very least gives you error bars—it makes the answer somewhere between your current estimate and the maximum entropy estimate.
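(A numeric sketch of the claim above; my own illustration, not from the discussion. Smear a point estimate with a symmetric, zero-mean error of unknown size, keep probabilities in [0, 1], and the average lands between the original estimate and the maximum entropy value of 0.5.)

```python
import random

def smeared_mean(p0, spread, n=100_000, seed=0):
    """Average of p0 plus a symmetric unknown error, truncated to [0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        noisy = p0 + rng.uniform(-spread, spread)  # symmetric: no made-up direction
        total += min(1.0, max(0.0, noisy))         # probabilities stay in [0, 1]
    return total / n

# As the unknown error grows, the answer slides from the current
# estimate (0.9) toward the maximum entropy estimate (0.5):
# smeared_mean(0.9, 0.1) ~ 0.90
# smeared_mean(0.9, 0.5) ~ 0.82
# smeared_mean(0.9, 1.0) ~ 0.70
```

The truncation at the boundaries is what moves the average: symmetric error mass that would land above 1 gets folded back, so total ignorance pulls the estimate toward 0.5 rather than leaving you "screwed."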
If you’re wrong in an unknown way, then it could just as well be 1% or 99%.
You might try to claim this averages to 50%. But theories don’t have uniform probability. There are more possible mistakes than truths. Almost all theories are mistaken. So when the probability is unknown, we have every reason to think it’s a mistake (if we’re just going to guess; we could of course use Popper’s epistemology instead which handles all this stuff), and there’s no justification for the theory. Right?
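(To make the "doesn't average to 50%" point concrete; this is my own formalization, and the uniform-over-variants assumption is mine, not something stated above. If a theory has many mistaken variants and one true one, and total ignorance makes them equally likely, the chance the theory is true sits far below 50%.)

```python
from fractions import Fraction

def chance_of_truth(mistaken_variants):
    """If there are `mistaken_variants` wrong variants of a theory and
    one true one, all equally likely under total ignorance, the chance
    any given variant is the true one is 1 / (mistaken_variants + 1)."""
    return Fraction(1, mistaken_variants + 1)

# "Almost all theories are mistaken" drags the ignorant-guess
# probability far below 50%:
# chance_of_truth(1)  == Fraction(1, 2)
# chance_of_truth(99) == Fraction(1, 100)
```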
Your comments about error bars are subject to regresses (what is the probability you are right about that method? about the maximum entropy estimate? etc.)
You don’t seem to be thinking with the concept of a probability distribution, or an average of one. You say “If you’re wrong in an unknown way, then it could just as well be 1% or 99%” as if it spells doom for any attempt to quantify probabilities. Really, all it is is a symmetry property of a probability distribution.
I guess I shouldn’t be expected to give you a class in probability over the internet when you are already convinced it’s all wrong. But again, I think you should read a textbook on this stuff, or take a class.
Are you aware that Yudkowsky doesn’t dispute the regress? He has an article on it.
http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/
If that’s what you’re using “the regress” to mean, sure, sign me up. But this has even less bearing than usual on whether uncertainty can be represented by probability, unless you are making the (unlikely and terrible) argument that nothing can be represented by anything.