I wrote this with the assumption that Bob would care about maximizing his money at the end, and that there would be a large but finite number of rounds.
In my view, your questions mostly don’t change the analysis much. The only difference I can see is that if Bob literally only cares about beating Alice, he should go all in: in that case, having $1 less than Alice is equivalent to having $0. That’s not really how people use money, though, and the setup seems pretty artificial.
How are you expecting these answers to change things?
I think you give short shrift to Ole Peters’s ideas here. His argument is similar to the one about maximizing repeated bets, but it holds together a lot better. I particularly like his treatment of the St. Petersburg problem in his paper.
You say that “We can’t time-average our profits [...] So we look at the ratio of our money from one round to the next.” But that’s not what Peters does! He maximizes total wealth in the limit as time goes to infinity.
In particular, we want to maximize $U = \lim_{T \to \infty} U_0 \prod_{t=0}^{T} R_t$, where $U$ is wealth after all the bets and $R_t$ is 1 plus the percent increase from bet $t$. The unique correct thing to maximize is wealth after all your bets.
You want to know what choice to make for any given decision, so you want to maximize your rate of return for each individual bet, which is $\left(\prod_{t=0}^{T} R_t\right)^{1/T}$. Peters does a few variable substitutions in the limit as $T \to \infty$ to get $R_t$ as a function of the probabilities of the bet’s outcomes (see the paper), and finds $\left(\prod_{t=0}^{T} R_t\right)^{1/T} = \prod_n r_n^{p_n}$, where $r_n$ is the gain from one possible outcome of the bet and $p_n$ is the probability of that outcome.
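You can check this identity numerically. Here’s a minimal sketch (the bet’s payoffs and probability are illustrative assumptions, not from Peters’s paper): simulate many repetitions of one bet, compute the empirical per-round growth rate $\left(\prod_t R_t\right)^{1/T}$, and compare it to $\prod_n r_n^{p_n}$.

```python
import math
import random

random.seed(0)

# Hypothetical two-outcome bet (illustrative numbers): with probability p
# wealth is multiplied by r_win, otherwise by r_lose.
p, r_win, r_lose = 0.5, 1.5, 0.6

# Accumulate log growth over T simulated rounds (logs avoid overflow/underflow).
T = 100_000
log_growth = 0.0
for _ in range(T):
    r = r_win if random.random() < p else r_lose
    log_growth += math.log(r)

time_average = math.exp(log_growth / T)   # empirical (prod_t R_t)^(1/T)
peters = r_win ** p * r_lose ** (1 - p)   # prod_n r_n^{p_n}

print(time_average, peters)
```

For these numbers $\prod_n r_n^{p_n} = \sqrt{1.5 \times 0.6} \approx 0.949 < 1$: a bet that looks favorable on the ensemble average ($0.5 \times 1.5 + 0.5 \times 0.6 = 1.05$) actually shrinks wealth over time.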
Then you just choose how much to bet to maximize $\prod_n r_n^{p_n}$. The argmax of a product is the same as the argmax of the sum of the logs, so choosing to maximize this time average will lead to the same observed behavior as choosing to maximize log utility in ensemble averages (because $\log r_n^{p_n} = p_n \log r_n$).
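As a concrete sketch of that last step (the bet parameters are my own illustrative assumptions): for an even-money bet won with probability $p$, staking a fraction $f$ gives outcomes $r = 1+f$ and $r = 1-f$, so we maximize $p \log(1+f) + (1-p)\log(1-f)$ over $f$. The maximizer is the familiar Kelly fraction $f^* = 2p - 1$.

```python
import math

# Hypothetical even-money bet, won with probability p (assumed value).
p = 0.6

def log_growth(f):
    # sum_n p_n log r_n, i.e. the log of prod_n r_n^{p_n}
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# Grid search over bet fractions in [0, 0.999].
best = max((f / 1000 for f in range(1000)), key=log_growth)
print(best)  # the Kelly fraction 2p - 1 = 0.2
```

The same argmax falls out of maximizing expected log wealth, which is exactly the equivalence claimed above.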