# Scott Garrabrant comments on A Proper Scoring Rule for Confidence Intervals

• You are correct. It doesn't work for more than two answers. I knew that when I thought about this before, but forgot. Corrected above.

I don't have a nice algorithm for N answers. I tried a bunch of the obvious simple things, and they don't work.

• I think an algorithm for N outcomes is: spin twice, gain 1 every time a spin lands on the right answer, but lose 1 if both spins come up the same.

One can "see intuitively" why it works: when we increase the spinner-probability of outcome i by a small delta (imagining that all other probabilities stay fixed, and not worrying about the fact that our probabilities now sum to 1 + delta), the spinner-probability of getting the same outcome twice goes up by 2 × delta × p[i]. However, on each spin we get the right answer delta × q[i] more of the time, where q[i] is the true probability of outcome i; since we spin twice, we get the right answer 2 × delta × q[i] more often. These cancel out if and only if p[i] = q[i]. [Obviously some work would need to be done to turn that into a proof...]
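The intuition above can be checked numerically. A minimal sketch (function names are illustrative, not from the comment): the reporter announces a distribution p, the true outcome is drawn from q, and the score is +1 per spin that matches the outcome, −1 if the two spins coincide. The expected score works out to 2·Σ p[i]q[i] − Σ p[i]², which differs from the honest score by exactly −Σ (p[i] − q[i])², so honesty is optimal.

```python
import random

def expected_score(p, q):
    """Closed-form expected score of reporting p when the truth is q:
    2*sum(p_i * q_i) - sum(p_i^2). Maximized exactly at p = q."""
    return 2 * sum(pi * qi for pi, qi in zip(p, q)) - sum(pi * pi for pi in p)

def spin_twice_score(p, q, trials=200_000, seed=0):
    """Monte Carlo estimate of the same quantity via the mechanism itself:
    two independent spins from p; +1 per spin matching the outcome drawn
    from q, -1 if the two spins agree."""
    rng = random.Random(seed)
    outcomes = range(len(p))
    total = 0
    for _ in range(trials):
        truth = rng.choices(outcomes, weights=q)[0]
        s1 = rng.choices(outcomes, weights=p)[0]
        s2 = rng.choices(outcomes, weights=p)[0]
        total += (s1 == truth) + (s2 == truth) - (s1 == s2)
    return total / trials

q = [0.5, 0.3, 0.2]          # true distribution (illustrative numbers)
honest = expected_score(q, q)
# No dishonest report should beat the honest one.
for p in ([0.6, 0.2, 0.2], [1/3, 1/3, 1/3], [0.4, 0.4, 0.2]):
    assert expected_score(p, q) <= honest
```

For q = [0.5, 0.3, 0.2] the honest expected score is 2·Σq² − Σq² = Σq² = 0.38, and the simulation lands close to that, consistent with the delta argument above.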

• Just to be clear: if you spin twice and both come up right, you're gaining 2 and then losing 1? (I.e., this is equivalent to what you wrote in an earlier version of the comment?)

• That’s right.