The last formula in this post, the conservation of expected evidence, had a mistake which I’ve only just now fixed.
Since the fix evidently wasn’t obvious even to me, I’ll put a reminder for myself here, which may or may not be useful to others.
Really I’m just “translating” from the “law of iterated expectations” I learned in my stats theory class, which was:
E[E[X|Y]]=E[X]
This uses a notation that is pretty standard for defining conditional expectations.
To define it, first consider the expected value given a particular value of the random variable Y.
Think of that as a function of that particular value:
f(y)=E[X|Y=y]
Then we define the conditional expectation as a random variable, obtained by plugging in the random value of Y:
E[X|Y]=f(Y)
The problem with this notation is that it can be unclear which capital letters denote random variables and which denote propositions, so I’ve bolded the random variables.
But it makes it very easy to state the law of iterated expectations.
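The law is easy to check numerically. Here is a minimal simulation sketch for a toy joint distribution (the distribution and all the specific values below are my own illustrative choices, not anything from the post): Y is a fair coin, and X given Y is Gaussian with a Y-dependent mean, so f(y) = E[X|Y=y] is known in closed form.

```python
import random

# Numerical sketch of the law of iterated expectations, E[E[X|Y]] = E[X],
# for an illustrative toy model: Y ~ Bernoulli(1/2), and X | Y=y ~ N(mu_y, 1).
random.seed(0)

MU = {0: -1.0, 1: 2.0}  # mu_y, so f(y) = E[X | Y=y] = MU[y] exactly

def sample_xy():
    y = random.choice([0, 1])
    x = random.gauss(MU[y], 1.0)
    return x, y

n = 200_000
samples = [sample_xy() for _ in range(n)]

# Direct Monte Carlo estimate of E[X].
e_x = sum(x for x, _ in samples) / n

# Estimate of E[E[X|Y]] = E[f(Y)], averaging f over the sampled Y's.
e_f_y = sum(MU[y] for _, y in samples) / n

# Both should be close to the true value (2 + (-1)) / 2 = 0.5.
print(e_x, e_f_y)
```

Both averages converge to the same number, which is the content of the unrelativized law.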
The law of iterated expectations also holds when “relativized”. That is,
E[E[X|Y]|B]=E[X|B]
where B is an event. If we wanted to stick to putting only random variables behind the conditional bar, we could have used the indicator function of that event.
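The relativized version can be checked the same way: conditioning on B just means averaging over the samples where B occurred. Again the model below is an illustrative choice of mine, not from the post: Y is uniform on {0, 1, 2}, X given Y=y is N(y, 1), and B is the event Y ≥ 1.

```python
import random

# Sketch of the relativized law, E[E[X|Y] | B] = E[X | B], for a toy model:
# Y uniform on {0, 1, 2}, X | Y=y ~ N(y, 1), and B is the event "Y >= 1".
random.seed(1)

def sample_xy():
    y = random.choice([0, 1, 2])
    x = random.gauss(float(y), 1.0)
    return x, y

n = 300_000
samples = [sample_xy() for _ in range(n)]
in_b = [(x, y) for x, y in samples if y >= 1]  # restrict to the event B

# E[X | B]: average X over the samples where B occurred.
e_x_given_b = sum(x for x, _ in in_b) / len(in_b)

# E[E[X|Y] | B]: average f(Y) = E[X|Y=y] = y over the same samples.
e_f_given_b = sum(y for _, y in in_b) / len(in_b)

# Both should be close to E[Y | Y >= 1] = (1 + 2) / 2 = 1.5.
print(e_x_given_b, e_f_given_b)
```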
And this translates to the statement in my post.
X is the indicator of the event H, so conditional expectations of X are conditional probabilities of H.
So E[X|Y] is Θ.
Our event B is the background information B; I used the same symbol there.
And the right hand side is another expectation of an indicator and therefore also a probability.
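Spelled out in the notation above (writing 1_H for the indicator of H, which is how I'm reading the post's Θ), the translation is:

```latex
% Take X = \mathbf{1}_H, so conditional expectations of X
% become conditional probabilities of H:
\begin{align*}
\Theta &= E[\mathbf{1}_H \mid \mathbf{Y}] = P(H \mid \mathbf{Y}),\\
E[\Theta \mid B] &= E\bigl[\,E[\mathbf{1}_H \mid \mathbf{Y}] \,\big|\, B\bigr]
                 = E[\mathbf{1}_H \mid B] = P(H \mid B),
\end{align*}
```

where the middle equality is exactly the relativized law of iterated expectations.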
I really didn’t want to define this notation in the post itself, but it’s how I’m trained to think of this stuff, so for my own confidence in the final formula I had to write it out this way.