I have the following question, the answer to which may be obvious, but which I have difficulty understanding: “expected utility” in a game is already the expected prize multiplied by its probability. Why do we multiply it by the probability again?
Abram is multiplying the conditional expected utility of an event by the probability of that event. For example, the utility of a lottery ticket conditional on winning the lottery could be a million dollars, and we multiply that by the probability of winning the lottery. The result is the “probutility” of the event. Taking the union of disjoint events is linear in both probabilities and probutilities, so we can think of them as the coordinates of a vector.
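To make that concrete, here is a minimal Python sketch (my own illustration, not from the post; the three outcomes and their payoffs are made up): it computes the probutility of an event A as P(A) · E[X ∣ A], which on a finite sample space is just the sum of p(w) · X(w) over the outcomes w in A, and checks the additivity over disjoint events.

```python
# A minimal sketch (my own illustration, not from the post), assuming a
# made-up three-outcome lottery. The "probutility" of an event A is
# P(A) * E[X | A], which equals E[X * 1_A] -- so on a finite sample space
# it is just the sum of p(w) * X(w) over the outcomes w in A.

# Hypothetical outcomes: name -> (probability, utility in USD).
outcomes = {
    "jackpot":   (1e-6,     1_000_000),
    "small_win": (0.01,     100),
    "lose":      (0.989999, 0),
}

def prob(event):
    # P(A): total probability of the outcomes in A.
    return sum(outcomes[w][0] for w in event)

def probutility(event):
    # P(A) * E[X | A] = sum over w in A of p(w) * X(w).
    return sum(outcomes[w][0] * outcomes[w][1] for w in event)

win = {"jackpot", "small_win"}

# Disjoint union adds up in both coordinates of the (probability, probutility)
# pair -- the linearity mentioned above.
assert abs(prob(win) - (prob({"jackpot"}) + prob({"small_win"}))) < 1e-12
assert abs(probutility(win) - (probutility({"jackpot"}) + probutility({"small_win"}))) < 1e-12
print(prob(win), probutility(win))  # 0.010001 2.0
```

The asserts are the point: both coordinates of the (probability, probutility) pair add under disjoint union, which is what lets us treat them as a vector.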
I still have a feeling that he is using the term “expected utility” differently from how it is used in other places, where it is already presented as (utility) × (probability), like here: https://wiki.lesswrong.com/wiki/Expected_utility
E.g., in your example: utility of a winning ticket = 1 million USD
Probability of winning: one millionth
Expected utility of a ticket = 1 USD.
Probutility = ???
I was confused about this too, but now I think I have some idea of what’s going on.
Normally, probability is defined for events, but expected value is defined for random variables, not events. What is happening in this post is that we are taking the expected value of events, by way of the conditional expected value of the random variable (conditioning on the event). In symbols, if A ⊂ Ω is some event in our sample space, we are saying E(A) = E(X ∣ A), where X : Ω → R is some random variable (this random variable is supposed to be clear from the context, so it doesn’t appear on the left-hand side of the equation).
Going back to cousin_it’s lottery example, we can formalize this as follows. The sample space can be Ω = {win, lose} and the probability measure is defined as P({win}) = 1/10^6 and P({lose}) = 1 − 1/10^6. The random variable L : Ω → R represents the lottery, and it is defined by L(win) = 10^6 and L(lose) = 0.
Now we can calculate. The expected value of the lottery is:
E(L) = P({win}) · L(win) + P({lose}) · L(lose) = (1/10^6) · 10^6 + (1 − 1/10^6) · 0 = 1.
The expected value of winning is:
E({win}) = E(L ∣ {win}) = L(win) = 10^6.
The “probutility” of winning is:
P({win}) · E(L ∣ {win}) = (1/10^6) · 10^6 = 1.
So in this case, the “probutility” of winning is the same as the expected value of the lottery. However, this is only because the situation is so simple. In particular, if L(lose) were not equal to zero (while winning and losing remained mutually exclusive events), then the two would have differed (the expected value of the lottery would have changed while the “probutility” would have remained the same).
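As a quick check of that last point, here is a small Python sketch (my own; the losing payoff of −1 is an assumed value, not part of the example above): changing L(lose) moves the lottery’s expected value while the “probutility” of winning stays put.

```python
# Sketch of the point above (my own check; the losing payoff of -1 is an
# assumed value, not from the example).
p_win = 1 / 10**6

def expected_value(l_win, l_lose):
    # E(L) = P({win}) * L(win) + P({lose}) * L(lose)
    return p_win * l_win + (1 - p_win) * l_lose

def probutility_of_winning(l_win):
    # P({win}) * E(L | win); conditional on winning, L is just L(win).
    return p_win * l_win

print(expected_value(10**6, 0))       # 1.0
print(expected_value(10**6, -1))      # ~1e-06: expected value changes...
print(probutility_of_winning(10**6))  # 1.0: ...the probutility does not.
```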
What is happening in this post is that we are taking the expected value of events, by way of the conditional expected value of the random variable (conditioning on the event).
...and I was enlightened. Assuming this is correct (it fits with how I read this post and a couple of others), this seems like a much better way to explain what’s going on with probutility.
Probutility of winning = 1 USD
So what is the difference between probutility and “expected utility”? Is it just another name for a well-known idea? (The comment was edited: at first I read “probutility” as “probability” in your comment.)