When you talk about the number of bits of anonymity he has once it’s been narrowed down to Kanto, shouldn’t that be the male population of Kanto?
Edit: The section about comparing mistakes also seems somewhat contradictory; first you talk about the number of people excluded (and so the first bit is, by definition, the most valuable) and then by the number of bits (and so the 11 bit mistake is more important than the 1.6 bit mistake). It may help to resolve the tension between the two approaches more explicitly.
When you talk about the number of bits of anonymity he has once it’s been narrowed down to Kanto, shouldn’t that be the male population of Kanto?
Yes, you’re right—I used the total population of Kanto, not the total male population. I should probably rejigger those numbers.
EDIT: OK, I think I fixed that specific error. Fortunately, the mistake had only contaminated a few numbers… I think. Please tell me if I’ve accidentally introduced additional inconsistencies!
It may help to resolve the tension between the two approaches more explicitly.
I believe I did do this before your comment, in mistake 3 where I discuss what the logarithmic scale buys us.
In general, it should take L about the same amount of work, in a Bayesian sense, to gather one more bit of information regardless of how many bits he already has. Thus, quantifying Light’s mistakes in terms of bits conceded is probably the best way to do it.
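A quick way to see why bits are the natural unit: each halving of the suspect pool costs exactly one bit, so the male-population correction is a one-bit adjustment wherever it is applied. A minimal sketch, using rough illustrative population figures (my assumptions, not the post’s exact numbers):

```python
import math

def anonymity_bits(population: int) -> float:
    """Bits of anonymity when hiding among `population` equally likely suspects."""
    return math.log2(population)

# Illustrative round figures (assumptions for the sketch):
WORLD = 7_000_000_000        # rough world population
KANTO = 43_000_000           # rough population of Japan's Kanto region
KANTO_MALE = KANTO // 2      # roughly half are male

# Narrowing the suspect pool from the whole world to Kanto concedes:
conceded = anonymity_bits(WORLD) - anonymity_bits(KANTO)

# Using the total rather than the male population overstates anonymity
# by exactly one bit, since it doubles the suspect pool:
overstatement = anonymity_bits(KANTO) - anonymity_bits(KANTO_MALE)

print(f"bits conceded by 'Kanto': {conceded:.2f}")
print(f"male-population correction: {overstatement:.2f} bits")
```

Because the scale is logarithmic, the correction is the same one bit whether it is applied to the world population or to Kanto’s, which is exactly what makes bits-conceded comparable across mistakes.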