No, just that it doesn’t manifest itself in the form of a pyramid of probabilities of probabilities being “correct”. There is certainly the problem of priors, and of justifying reasoning that way in the first place (both of which were sketched by others in the other thread).
Yeah, you’re making a flawed argument by analogy: “There’s an infinite regress in deductive logic, therefore any attempt at justification using probability will also lead to an infinite regress.” The reason probabilistic justification doesn’t run into this (or at least, not the exact analogous thing) is that “being wrong” is a definite state with known properties, one that is already taken into account when you make your estimate. This is very unlike deductive logic.
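A toy model (my own construction, not anything from the thread) may make the contrast concrete. Suppose you hold a naive credence `p`, and you believe your estimation process itself errs with some probability `e`, in which case you'd fall back to a neutral credence `m`. Folding that doubt in once gives T(p) = (1 − e)·p + e·m. Even if a skeptic insists you apply the same doubt again at every meta-level, iterating T converges geometrically to a definite value rather than producing an undefined regress:

```python
def corrected(p, e=0.1, m=0.5):
    """One round of folding 'my estimation process may be wrong'
    into the estimate: with prob. (1 - e) keep p, with prob. e
    fall back to the neutral credence m."""
    return (1 - e) * p + e * m

p = 0.9           # naive first-order credence (illustrative numbers)
history = [p]
for _ in range(50):        # stack 50 meta-levels of self-doubt
    p = corrected(p)
    history.append(p)

print(history[0])   # 0.9
print(history[1])   # 0.86  -- one round of correction
print(history[-1])  # approaches the fixed point m = 0.5
```

The point of the sketch is only that the probabilistic “tower” is mathematically well-behaved: each level is a definite number, and the sequence has a limit. Whether that limit is a *satisfying* answer to the regress problem is exactly what the thread is disputing.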
Agent beliefs don’t normally regress to before they were conceived. They get assigned some priors around when they are born—usually by an evolutionary process.
Are you saying that there is no regress problem? Yudkowsky disagrees. And so do other commenters here, one of whom called it a “necessary flaw”.
That essay seems pretty yuck to me.
I’m not clear on what you are saying.