I suggest you look up the concept of “subjective Bayesian”. Probabilities are states of knowledge. If you don’t know an answer, it’s uncertain. If someone who doesn’t know anything you don’t can look over your odds and construct a knowably losing bet anyway, or construct a winning bet that you refuse, then you are Dutch-bookable.
Also, considering that you have apparently been reading this site for years and you still have not grasped the concept of subjective uncertainty, and you are still working with a frequentist notion of probability, nor yet have you even grasped the difference, I would suggest to you in all seriousness that you seek enlightenment elsewhere.
(Sorry, people, there’s got to be some point at which I can express that. Downvote if you must, I suppose, if you think such a concept is unspeakable or unallowed, but it’s not a judgment based on only one case of incomprehension, of course.)
I don’t know the background conflict. But at least one of taw’s points is correct. Any prior P, of any agent, has at least one of the following three properties:
1. It is not defined on all X, i.e., P is agnostic about some things.
2. It has P(X) < P(X and Y) for at least one pair X and Y, i.e., P sometimes falls for the conjunction fallacy.
3. It has P(X) = 1 for all mathematically provable statements X, i.e., P is an oracle.
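Property 2 is what makes the Dutch book go through: a bettor who notices the agent quoting P(X) < P(X and Y) can lock in a sure profit in every possible world. A minimal sketch, with made-up odds (nothing here is from the thread):

```python
# Sketch: why quoting P(X) < P(X and Y) is Dutch-bookable.
# The bettor sells the agent a $1 ticket on "X and Y" and buys from the
# agent a $1 ticket on "X", pocketing the price difference up front.

p_x = 0.3        # hypothetical odds the agent quotes for X
p_x_and_y = 0.4  # hypothetical odds for "X and Y" -- incoherently higher

assert p_x < p_x_and_y  # the conjunction-fallacy condition

def bettor_profit(x: bool, y: bool) -> float:
    premium = p_x_and_y - p_x                # collected up front
    payout_owed = 1.0 if (x and y) else 0.0  # we sold the "X and Y" ticket
    payout_received = 1.0 if x else 0.0      # we bought the "X" ticket
    return premium - payout_owed + payout_received

# The bettor profits no matter which way the world turns out:
for x in (True, False):
    for y in (True, False):
        assert bettor_profit(x, y) > 0
```

Since "X and Y" implies X, the sold ticket never pays out without the bought ticket paying out too, so the up-front premium is a guaranteed gain.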
You aren’t excused from having to pick one by rejecting frequentist theory.
To make use of a theory like probability one doesn’t have to have completely secure foundations. But it’s responsible to know what the foundational issues are. If you make a particularly dicey or weird application of probability theory, e.g. game theory with superintelligences, you should be prepared to explain (especially to yourself!) why you don’t expect those foundational issues to interfere with your argument.
About taw’s point in particular, I guess it’s possible that von Neumann gave a completely satisfactory solution when he was a teenager, or whatever, and that I’m just showcasing my ignorance. (I would dearly like to hear about this solution!) Otherwise your comment reads like you’re shooting the messenger.
Logical uncertainty is an open problem, of course (I attended a workshop on it once, and was surprised at how little progress has been made).
But so far as Dutch-booking goes, the obvious way out is option 2, with the caveat that the probability distribution never exhibits P(X) < P(X&Y) at the same time: you ask it for P(X) and it gives you an answer; you ask it for P(X&Y) and it gives you a higher answer; you ask it for P(X) again and it now gives you an even higher answer, having updated its logical uncertainty upon seeing the new thought Y.
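The escape described here can be sketched as a toy agent (entirely my own illustration, with made-up credences): its answer for P(X) drifts upward as it considers new thoughts, so the incoherent pair of quotes never stands on the books at one time.

```python
# Toy sketch of the "way out": logical beliefs update as the agent thinks,
# so P(X) < P(X&Y) never holds simultaneously.

class LogicallyUncertainAgent:
    def __init__(self):
        self.considered = set()  # thoughts (lemmas) examined so far
        self.p_x = 0.5           # hypothetical starting credence in X

    def prob_x(self) -> float:
        return self.p_x

    def prob_x_and_y(self) -> float:
        # Being asked about X&Y surfaces the thought Y; suppose Y turns
        # out to strongly support X, so credence in X is revised upward.
        if "Y" not in self.considered:
            self.considered.add("Y")
            self.p_x = 0.8       # hypothetical post-update credence in X
        return 0.7               # answer for X&Y under the new state

agent = LogicallyUncertainAgent()
first = agent.prob_x()       # 0.5
conj = agent.prob_x_and_y()  # 0.7 -- higher than the earlier P(X) quote...
second = agent.prob_x()      # 0.8 -- ...but P(X) has since moved above it
assert first < conj < second
```

At any single moment the agent's current quotes are coherent; the apparent conjunction-fallacy pair only exists across time, which a Dutch book cannot exploit.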
It is also clear from the above that taw does not comprehend the notion of subjective uncertainty, hence the notion of logical uncertainty.
Have any ideas?
Your full endorsement of evaporative cooling is quite disturbing.
Are you at least aware that the epistemic position you're promoting is highly contrarian?
How, exactly, to deal with logical uncertainty is an unsolved problem, no?
It’s not clear why it’s more of a problem for Bayesian than anyone else.
The post you linked to is a response to Eliezer arguing with Hanson, with Eliezer taking a pro-his-own-contrarianism stance. Of course he’s aware of that contrarianism.
Your choice is either accepting that you will be sometimes inconsistent, or accepting that you will sometimes answer “I don’t know” without providing a specific number, or both.
There’s nothing wrong with “I don’t know”.
For a Perfect Bayesian or for a Subjective Bayesian?
A Subjective Bayesian does believe many statements of the kind P(simple math step) = 1 and P(X | conjunction of simple math steps) = 1, and yet has P(X) < 1.
It does not believe math statements with probability 1 or 0 until it investigates them. As soon as it investigates whether X follows from the conjunction of simple math steps and determines the answer, it sets P(X) = 1.
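That update pattern can be illustrated concretely (my own example, not from the thread): hold a middling credence in an unexamined arithmetic claim, then snap to 1 or 0 once the simple steps are actually carried out.

```python
# Sketch: a Subjective Bayesian's credence in an unexamined math statement.
# The claim is "641 divides 2^32 + 1" (Euler's factor of the Fermat number
# F5, so the statement happens to be true).

divisor, n = 641, 2**32 + 1

credence = 0.5  # before investigating: genuinely uncertain about the claim

# Investigation: actually perform the simple arithmetic step.
credence = 1.0 if n % divisor == 0 else 0.0

assert credence == 1.0  # 2^32 + 1 = 641 * 6700417
```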
The problem with “I don’t know” is that sometimes you have to make decisions. How do you propose to make decisions if you don’t know some relevant mathematical fact X?
For example, if you’re considering some kind of computer security system that is intended to last a long time, you really need an estimate for how likely it is that P=NP.
Then you need to fully accept that you will be inconsistent sometimes. And compartmentalize your belief system accordingly, or otherwise find a way to deal with these inconsistencies.
Well, they can wriggle out of this by denying P(simple math step) = 1, which is why I introduced this variation.
Doesn’t this imply you’d be willing to accept P(2+2=5) on good enough odds?
This might be pragmatically a reasonable thing to do, but if you accept that all math might be broken, you’ve already given up any hope of consistency.