I think your justification for Kelly is pragmatically sufficient, but theoretically leaves me a bit cold. I’m interested in knowing why Kelly is the right choice here.
So I feel like any justification eventually has to boil down to either pragmatics (“here’s something we care about”) or pretend-pragmatics (“here’s something we’re pretending to care about for the purposes of this hypothetical; presumably we think there’s some correspondence to the real world, but we may not specify exactly what we think it is”). If we don’t have something like that, why pick one theoretical justification over another?
And I don’t feel like my justification is lacking in theory. It’s not that I’ve done a bunch of experiments and said “this seems to satisfy my pragmatic desires but I don’t know why”. I have a theoretical argument for why it satisfies my pragmatic desires.
I don’t follow, sorry.
What statistic is this? If I calculate the time-average for one step, using the Kelly strategy, I get roughly 1.02:
$T(U_1) = 1.2^{0.6} \cdot 0.8^{0.4} \approx 1.02$
If I calculate it for two steps, if I’ve done it right, I get roughly 1.04:
$T(U_2) = (1.2 \cdot 1.2)^{0.6 \cdot 0.6} \cdot (1.2 \cdot 0.8)^{0.6 \cdot 0.4} \cdot (0.8 \cdot 1.2)^{0.4 \cdot 0.6} \cdot (0.8 \cdot 0.8)^{0.4 \cdot 0.4} \approx 1.04$
and I don’t think this converges as the number of steps $n$ grows. If I’m not getting myself mixed up, then what does converge is $\frac{1}{n} \log T(U_n)$, but… I can do the same with “expected money following the bet-everything strategy”:
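As a sanity check on those numbers, here is a short sketch (my own, not from the discussion) of the bet I assume is in play: an even-odds bet won with probability $p = 0.6$, where the Kelly fraction $2p - 1 = 0.2$ gives wealth multipliers 1.2 on a win and 0.8 on a loss:

```python
import math

# Assumed example: even-odds bet, win probability p = 0.6.
# Kelly fraction 2p - 1 = 0.2 gives per-step wealth multipliers
# 1.2 (win) and 0.8 (loss), matching the numbers above.
p, win, lose = 0.6, 1.2, 0.8

def time_average(n):
    # T(U_n): probability-weighted geometric mean of wealth after n steps.
    # By independence across steps, this factors as (win^p * lose^(1-p))^n.
    return (win ** p * lose ** (1 - p)) ** n

print(round(time_average(1), 2))  # 1.02
print(round(time_average(2), 2))  # 1.04

# T(U_n) itself keeps growing without bound, but the normalized
# statistic (1/n) * log T(U_n) is the same for every n:
for n in (1, 10, 100):
    print(math.log(time_average(n)) / n)  # log(1.2^0.6 * 0.8^0.4) ≈ 0.0201
```

Expanding $T(U_2)$ as the four-term product above gives the same value, since the exponents on 1.2 sum to $2 \cdot 0.6$ and those on 0.8 sum to $2 \cdot 0.4$.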
$E(U_1) = 2 \cdot 0.6 + 0 \cdot 0.4 = 1.2$
$E(U_2) = (2 \cdot 2) \cdot (0.6 \cdot 0.6) + (2 \cdot 0) \cdot (0.6 \cdot 0.4) + \ldots = 1.44$
$\log E(U_n) = n \log(1.2)$, so $\frac{1}{n} \log E(U_n) = \log(1.2)$, which converges just as well.
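Under the same assumed bet ($p = 0.6$, even odds), the bet-everything side of the comparison is easy to check too — again a sketch of my own: wealth doubles with probability 0.6 and drops to zero otherwise, so $E(U_n) = 1.2^n$:

```python
import math

# Same assumed bet (p = 0.6, even odds), now staking all wealth each step:
# wealth doubles with probability p and goes to zero otherwise.
p = 0.6

def expected_money(n):
    # E(U_n) for the bet-everything strategy: (2*p + 0*(1 - p))^n = 1.2^n.
    return (2 * p) ** n

print(expected_money(1))            # 1.2
print(round(expected_money(2), 2))  # 1.44

# The analogous normalized statistic converges here as well:
for n in (1, 10, 100):
    print(math.log(expected_money(n)) / n)  # log(1.2) ≈ 0.182
```

So a convergent growth-rate statistic exists for the bet-everything strategy too, which is the point: convergence of the normalized log alone doesn’t single out Kelly.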