Question on math in “A Technical Explanation of Technical Explanation”

“A Technical Explanation of Technical Explanation” (http://​​yudkowsky.net/​​rational/​​technical) defines a proper rule for a betting game as one where the expected payoff is maximized by betting an amount proportional to the probability of success.

The first example rule given is that the payoff is one minus the squared error, so for example if you make a bet of .3 on the winner, your payoff is 1-(1-.3)^2 = .51.

This doesn’t seem like a good example. It works if there are only two options, but I don’t think it works if there are three or more. For example, suppose P(red) = .5, P(blue) = .2, P(green) = .3. If we place bets of .5, .2, and .3 respectively, the expected return is .6. (edit: Fixed a mistake pointed out by Douglas Knight.)
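The numbers here are easy to check numerically (a minimal sketch; `expected_payoff` is my own helper, and the rule 1-(1-bet)^2 is applied to whichever option wins):

```python
# Expected return under the rule: payoff = 1 - (1 - bet on the winner)^2
def expected_payoff(bets, probs):
    return sum(p * (1 - (1 - b) ** 2) for b, p in zip(bets, probs))

probs = [0.5, 0.2, 0.3]
print(expected_payoff([0.5, 0.2, 0.3], probs))    # ≈ .6, betting the probabilities
print(expected_payoff([0.51, 0.19, 0.3], probs))  # ≈ .60173, slightly better
```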

However, if I place bets of .51, .19, and .3, the expected return is .60173. The condition for maximization is

(1-R)P(R) = (1-B)P(B) = (1-G)P(G),

which I got by taking partial derivatives of the expected return and setting them equal, subject to the constraint that the bets sum to 1. (“R” stands for the bet placed on red and “P(R)” for the probability of red, etc.) This is different from simply R = P(R), etc.
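For what it’s worth, that condition plus the sum-to-1 constraint can be solved in closed form: writing (1-x_i)P(i) = λ gives x_i = 1 − λ/P(i), and summing over the n options gives λ = (n−1)/Σ 1/P(j). A quick sketch (my own code, not from the article):

```python
# Solve (1 - x_i) * p_i = lam for all i, with sum(x_i) = 1:
#   x_i = 1 - lam / p_i   =>   lam = (n - 1) / sum(1 / p_j)
def optimal_bets(probs):
    n = len(probs)
    lam = (n - 1) / sum(1 / p for p in probs)
    return [1 - lam / p for p in probs]

bets = optimal_bets([0.5, 0.2, 0.3])
print(bets)  # ≈ [0.6129, 0.0323, 0.3548] -- not the probabilities themselves
```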

So does the article have a mistake, or do I, or did I miss part of the context?