You mean, the potential actions are discrete but the potential outcomes for those actions are continuous, with a probability measure over those outcomes, or that there is a non-discrete set of possible actions, or something else?
Yes, potential actions are discrete and outcomes are arbitrarily distributed.
I’m not sure I’m understanding this correctly. Are you asking how the St. Petersburg Paradox works?
No, I mean that the Kelly criterion says that allocation to a bet should be proportional to expected value over payoff. If I hold expected value constant and integrate over payoff, the integral diverges. Intuitively I would expect to see a finite integral, reflecting that Kelly restricts how much risk I should be willing to take.
Before you take the derivative with respect to Delta, apply the desired utility function; then take the derivative.
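A minimal sketch of that suggestion (my own illustration, not code from the thread): apply log utility to the two one-bet outcomes, then maximize over the bet fraction (the thread's Delta), and compare against the closed-form Kelly fraction. The values of p and b and the grid size are assumptions for the demo.

```python
import math

def expected_log_growth(f, p, b):
    """Expected log wealth change: win prob p, net odds b, bet fraction f."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

def kelly_numeric(p, b, steps=10_000):
    """Grid-search maximizer of expected log growth on [0, 1)."""
    candidates = (i / steps for i in range(steps))  # stay below 1 so log(1 - f) is finite
    return max(candidates, key=lambda f: expected_log_growth(f, p, b))

p, b = 0.6, 1.0
print(kelly_numeric(p, b))   # numerically recovered optimum
print(p - (1 - p) / b)       # closed-form Kelly fraction for comparison
```

The point of the exercise: the maximizer of the expected *log* wealth, not of expected wealth itself, is what reproduces the Kelly fraction.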
Interesting. I should try this later.
(Note that linear utility functions behave the same as logarithmic utility functions, and Wikipedia’s treatment assumes a linear utility function, not a logarithmic one.)
The Kelly criterion is the natural result when assuming a logarithmic utility function. For a linear utility function it arises if the actor maximizes expected growth rate.
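A small simulation sketch of the growth-rate claim (the parameters, seed, and run counts are my assumptions, not from the thread): for p = 0.6 and b = 1 the Kelly fraction is 0.2, and betting either more or less than that should produce a lower typical long-run growth rate.

```python
import math
import random
import statistics

def final_log_wealth(f, p, b, n_bets, rng):
    """Log of final wealth after n_bets, betting fraction f each time."""
    log_w = 0.0
    for _ in range(n_bets):
        log_w += math.log(1 + f * b) if rng.random() < p else math.log(1 - f)
    return log_w

def median_growth(f, p=0.6, b=1.0, n_bets=1000, runs=300, seed=0):
    """Median per-bet growth rate across many simulated bet sequences."""
    rng = random.Random(seed)  # same seed for every f: paired comparisons
    return statistics.median(final_log_wealth(f, p, b, n_bets, rng)
                             for _ in range(runs)) / n_bets

for f in (0.1, 0.2, 0.3):
    print(f, median_growth(f))   # the Kelly fraction 0.2 should come out on top
```

Reusing the same seed for each fraction means every f faces the same win/loss sequences, so the comparison isolates the effect of the bet size.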
> Yes, potential actions are discrete and outcomes are arbitrarily distributed.
It seems like this paper or this paper might be relevant to your interests. (PM me your email if you don’t have access to them.)
> No, I mean that the Kelly criterion says that allocation to a bet should be proportional to expected value over payoff. If I hold expected value constant and integrate over payoff, the integral diverges. Intuitively I would expect to see a finite integral, reflecting that Kelly restricts how much risk I should be willing to take.
Kelly tells you how much risk you should be willing to take for a particular b; integrating over b is not meaningful, since that integrates over multiple bets. (Note that f is E/b, where E is the expected value, and the integral of 1/x diverges. Since p is capped at 1, E is capped at b, and the maximum risk you should take is betting everything, when p=1, i.e. it's a sure thing.)
If you put a probability p(b) on any particular payout, you might get something meaningful out of integrating p(b)E/b, but it’s not clear to me that’s the right way to do things.
> Interesting. I should try this later.
It won’t work out very prettily, but it is instructive. Basically, that tells you how much your bet should have differed from Delta, given what happened. You can then figure out what would have been optimal for that sequence, then do a weighted sum over sequences. (Only the logarithmic utility function is scale invariant, so if yours isn’t log, you need information on how long the game runs; and if you’re allowed to change the fraction of your wealth that you put up each time, it’s an entirely different problem.)
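A sketch of the "optimal for that sequence, then weighted sum over sequences" procedure, under log utility (p, b, and the game length n are assumed values for the demo). With log utility the hindsight optimum depends only on the win count, which is the scale invariance mentioned above, so the weighted sum reduces to a sum over binomial outcomes.

```python
from math import comb

def hindsight_fraction(wins, n, b):
    """Fraction that would have maximized log wealth for a sequence with `wins` wins."""
    phat = wins / n
    return max(0.0, phat - (1 - phat) / b)  # empirical Kelly, floored at 0 (no shorting)

def weighted_hindsight(p, b, n):
    """Binomial-probability-weighted sum of the hindsight-optimal fractions."""
    return sum(comb(n, w) * p**w * (1 - p)**(n - w) * hindsight_fraction(w, n, b)
               for w in range(n + 1))

p, b, n = 0.6, 1.0, 20
print(weighted_hindsight(p, b, n))   # weighted hindsight optimum
print(p - (1 - p) / b)               # ex-ante Kelly fraction, for comparison
```

As warned, it doesn't come out prettily: flooring the per-sequence optimum at zero pushes the weighted sum above the ex-ante Kelly fraction rather than reproducing it exactly.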