1°
If we are going to build an artificial mind that reasons with Bayesian probability, we should be able to ask it the probability of any sentence, independently of whether it must act on that sentence or not. Think, for example, of an oracular AI.
For this reason, I think that denying the concept of Knightian uncertainty on the basis of a decision-theoretic criterion is misguided: ideally, we should be able to assign to any sentence some number expressing our degree of belief. "Ideally" meaning that, when building a concrete finite robot, we might improve its efficiency by cutting unnecessary calculations. What we are doing here, though, is talking about Knightian uncertainty in principle.
I think the problem has already been solved quite nicely by the notion of the Ap distribution (Jaynes, chapter 18). Knightian uncertainty about A is simply the uncertainty we get when we have a smooth Ap distribution.
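As a sketch of the idea (my own illustration, not from Jaynes): if we represent the Ap distribution as a density over the possible values of p = P(A), the probability we assign to A is just its first moment. A sharp density and a smooth one can yield the same P(A) while encoding very different states of knowledge:

```python
import numpy as np

# Grid over the possible values of p = P(A)
p = np.linspace(0.0, 1.0, 10_001)

def prob_of_A(ap_density):
    """P(A) is the first moment (mean) of the Ap distribution."""
    w = ap_density / ap_density.sum()  # normalize to discrete weights
    return (p * w).sum()

# Sharp Ap: nearly all mass near p = 0.5 (much information about A)
sharp = np.exp(-((p - 0.5) ** 2) / (2 * 0.01 ** 2))

# Smooth Ap: uniform over [0, 1] (Knightian uncertainty about A)
smooth = np.ones_like(p)

print(prob_of_A(sharp))   # ≈ 0.5
print(prob_of_A(smooth))  # ≈ 0.5 as well: same number, very different knowledge
```

Both densities give P(A) ≈ 0.5; what distinguishes Knightian uncertainty is the smoothness of the density, not the number it yields.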
2°
In the bets proposed about the coin toss, there are not only two bets; there is a third bet surreptitiously used by Sir Percy:
1 – pay 0.5 and receive 1.1 on head
2 – pay 0.5 and receive 1.1 on tail
3 – one AND two together, that is: pay 1 and receive 1.1 whatever the outcome
Now, if on the event H (heads) we have a uniform Ap distribution between 0.4 and 0.6, it is possible to show that the probability of H is its first moment, 0.5.
Thus the expected returns:
1 and 2 – 0.5·1.1 + 0.5·0 − 0.5 = 0.05
3 – 1.1·1 − 1 = 0.1
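A quick numerical check of these returns (a sketch; the 0.5 stake and 1.1 payoff are from the bets above, and P(H) = 0.5 is the first moment of the uniform Ap distribution on [0.4, 0.6]):

```python
# P(H) is the first moment of the uniform Ap distribution on [0.4, 0.6]
p_heads = (0.4 + 0.6) / 2  # = 0.5

stake, payoff = 0.5, 1.1

# Bet 1 (and, by symmetry, bet 2): pay 0.5, receive 1.1 on one outcome
ev_single = p_heads * payoff + (1 - p_heads) * 0.0 - stake

# Bet 3 = bet 1 AND bet 2: pay 1 in total, receive 1.1 with certainty
ev_both = payoff - 2 * stake

print(round(ev_single, 2))  # 0.05
print(round(ev_both, 2))    # 0.1
```

Bet 3 dominates each single bet, and no appeal to anything beyond expected value is needed.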
It is clear that even from a simple expected utility maximization perspective, taking both bets is better. Knightian uncertainty is not involved at all.
3°
The three games of tennis are very clearly distinct from a Bayesian point of view, using Ap distributions:
the balanced game: a sharp Ap distribution centered at 0.5;
the mysterious game: a uniform Ap distribution;
the unbalanced game: a Jeffreys Ap distribution (U-shaped, with the mass concentrated near 0 and 1).
If your bets only involve A, then surely all these Ap distributions have first moment 0.5, so a Bayesian reasoner has no preference. But if the bets involve the Ap distributions themselves, then surely a Bayesian has very good reason to distinguish between them. Indeed, people tend to bet on what they have much more information about, that is, where the Ap distribution is sharper, because it is more stable under further evidence.
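The three games can be modeled with Beta densities as Ap distributions (my choice of parametrization: a narrow Beta for the balanced game, Beta(1, 1) for the mysterious game, and the Jeffreys Beta(1/2, 1/2) for the unbalanced game). All three have first moment 0.5, but they differ in spread, and hence in how far a single observation moves them:

```python
# Beta(a, b) as the Ap distribution for each game (my parametrization):
# mean = a/(a+b), variance = ab/((a+b)^2 (a+b+1)); one observed win updates (a, b) -> (a+1, b).
games = {
    "balanced":   (100, 100),  # sharp peak at 0.5
    "mysterious": (1, 1),      # uniform on [0, 1]
    "unbalanced": (0.5, 0.5),  # Jeffreys: mass concentrated near 0 and 1
}

for name, (a, b) in games.items():
    mean = a / (a + b)                          # first moment: 0.5 for all three
    var = a * b / ((a + b) ** 2 * (a + b + 1))  # but the spreads differ
    mean_after_win = (a + 1) / (a + b + 1)      # conjugate update on one observed win
    print(f"{name:10s} mean={mean:.2f} var={var:.4f} after one win={mean_after_win:.3f}")
```

One win barely moves the balanced game's distribution, shifts the mysterious game's mean to 2/3, and the unbalanced game's to 3/4: exactly the stability ordering described above.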
A very rational behavior indeed.