I’m confused. Could someone help?

Imagine that I’m offering a bet that costs 1 dollar to accept. The prize is X + 5 dollars, and the odds of winning are 1 in X. Accepting this bet therefore has an expected value of (1/X)(X + 5) − 1 = 5/X dollars, which is positive, and offering it has an expected value of −5/X dollars. It seems like a good idea to accept the bet, and a bad idea for me to offer it, for any reasonably sized value of X.
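
To spell the arithmetic out, here's a minimal Python sketch with an illustrative value of X (the exact choice doesn't matter, it just makes the 5/X shape visible):

```python
# The bet's terms: pay $1 to play; with probability 1/X, win $(X + 5).
X = 1_000  # illustrative value; any large X behaves the same way

ev_accept = (1 / X) * (X + 5) - 1  # = 5/X: small, but positive
ev_offer = -ev_accept              # the offerer's side is the mirror image

print(ev_accept)  # roughly 0.005, i.e. 5/X dollars
print(ev_offer)   # roughly -0.005
```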

Does this still hold for unreasonably sized values of X? Specifically, what if I make X really, really big? If X is big enough, I can reasonably assume that, basically, nobody’s ever going to win. I could offer a bet with odds of 1 in 10^100 once every second until the Sun goes out, and still expect, with near certainty, that I’ll never have to make good on my promise to pay. So I can offer the bet without caring about its negative expected value, and take free money from all the expected-value maximizers out there.
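
To put a number on "near certainty": assuming, very roughly, 5 billion years until the Sun goes out (about 1.6 × 10^17 seconds, so that many one-per-second offers), a quick sketch:

```python
X = 10 ** 100                                 # odds of a win: 1 in X
n_offers = int(5e9 * 365.25 * 24 * 3600)      # ~1.6e17 offers, one per second
                                              # (5-billion-year figure is a rough assumption)

# Chance I ever pay out: 1 - (1 - 1/X)**n, which is ~n/X when n/X is tiny.
p_any_payout = n_offers / X                   # ~1.6e-83: effectively never

# Yet my expected value is still negative: -5/X per offer, summed over all offers.
total_ev = -5 * n_offers / X                  # ~-8e-83 dollars

# Meanwhile my almost-certain realized profit is the $1 stake from each offer.
almost_sure_profit = n_offers                 # ~1.6e17 dollars

print(p_any_payout, total_ev, almost_sure_profit)
```

The tension this makes concrete: the expected value of offering is negative, but with probability ~1 − 10^−83 the realized outcome is a profit of ~10^17 dollars; the single astronomically rare payout carries all of the negative weight.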

What’s wrong with this picture?

See also: Taleb Distribution, Nick Bostrom’s version of Pascal’s Mugging

(Now, in the real world, I obviously don’t have 10^100 + 5 dollars to cover my end of the bet, but does that really matter?)


Edit: I should have actually done the math. :(