Bayesianism in the face of unknowns

Suppose I tell you I have an infinite supply of unfair coins. I pick one at random and flip it, recording the result. I’ve done this a total of 100 times, and all 100 flips came out heads. I will pay you $1000 if the next flip is heads, and $10 if it’s tails. Each unfair coin is otherwise entirely normal: its flips are independent Bernoulli trials that come up “heads” with some unknown probability p. This is all you know. How much would you pay to enter this game?
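
To pin down the mechanics as I read them (a fresh coin is drawn for every single flip), here is a minimal simulation sketch. The Beta(300, 1) supply is a made-up stand-in, chosen only so that a run of 100 heads is not astronomically unlikely; it is not part of the puzzle:

```python
import random

rng = random.Random(42)

def draw_coin() -> float:
    # Stand-in for my unknown supply: each coin's heads-probability p
    # comes from a Beta(300, 1) here. The real distribution is unknown.
    return rng.betavariate(300, 1)

def flip(p: float) -> bool:
    return rng.random() < p  # True means heads

# "Pick one randomly and flip it", repeated 100 times.
observed = [flip(draw_coin()) for _ in range(100)]
print(sum(observed), "heads out of", len(observed))

# The wager rides on one more pick-and-flip.
payout = 1000 if flip(draw_coin()) else 10
print("payout:", payout)
```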

I suppose another way to phrase this question is “what is your best estimate of your expected winnings?”, or, more generally, “how do you choose the maximum price you’ll pay to play this game?”

Observe that the only information you have about the distribution from which I’m drawing my coins is those 100 outcomes. Importantly, you don’t know how p is distributed across my supply of unfair coins. Can you reasonably assume a specific distribution for your calculation, and claim that the estimate it yields is better than one derived from any other assumed distribution?
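
For instance, the classic choice is a uniform prior on the overall probability of heads, which turns the estimate into Laplace’s rule of succession: after 100 heads in 100 flips, the predictive probability of another head is 101/102, for an expected payout of about $990.29. A sketch of that arithmetic (the function is mine, purely illustrative):

```python
from fractions import Fraction

def rule_of_succession_price(heads: int, flips: int,
                             pay_heads: int = 1000, pay_tails: int = 10) -> float:
    # Posterior predictive P(next flip is heads) under a uniform prior on
    # the overall heads probability: Laplace's rule, (k + 1) / (n + 2).
    p = Fraction(heads + 1, flips + 2)
    return float(p * pay_heads + (1 - p) * pay_tails)

print(rule_of_succession_price(100, 100))  # 990.2941...
```

But that number is only as good as the uniform assumption behind it.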

Most importantly, can one actually produce a “theoretically sound” expectation here? That is, one that is calibrated: if you pay your expected winnings every time and we repeat this experiment many times, your average net winnings will be zero (assuming I’m drawing from the same supply of unfair coins each time).
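
To see why I doubt it, one can estimate that break-even price empirically for any hypothetical supply; the catch is that different supplies, each perfectly consistent with 100 straight heads, break even at different prices. A sketch (both supplies below are invented for illustration):

```python
import random

def breakeven_price(coin_supply, rounds: int = 200_000,
                    pay_heads: int = 1000, pay_tails: int = 10,
                    seed: int = 1) -> float:
    # Empirical break-even price: repeat the deciding pick-and-flip many
    # times against the same supply and average the payout.
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        p = coin_supply(rng)  # draw a fresh coin from the supply
        total += pay_heads if rng.random() < p else pay_tails
    return total / rounds

# Two hypothetical supplies, both quite capable of producing 100 straight
# heads, yet with different break-even prices:
print(breakeven_price(lambda rng: rng.betavariate(200, 1)))  # ~995
print(breakeven_price(lambda rng: 0.9999))                   # ~1000
```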

I suspect that the best one can do here is not a single number but a set of confidence intervals: you’re 80% confident that the price you should pay to break even in the repeated game is between A80 and B80, 95% confident it’s between A95 and B95, and so on.
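
In the all-heads case those bounds are easy to compute exactly: the lower confidence limit on the heads probability is the p at which 100 straight heads would still occur with probability 1 - confidence, and the upper limit is 1. A sketch of that construction (my mapping, assuming the exact one-sided binomial bound):

```python
def price_interval(flips: int, confidence: float,
                   pay_heads: int = 1000, pay_tails: int = 10):
    # All-heads case: the lower limit on the heads probability is the p
    # solving p**flips == 1 - confidence; the upper limit is exactly 1.
    p_lo = (1 - confidence) ** (1 / flips)

    def price(p: float) -> float:
        return p * pay_heads + (1 - p) * pay_tails

    return price(p_lo), price(1.0)

print(price_interval(100, 0.80))  # roughly (984.2, 1000.0)
print(price_interval(100, 0.95))  # roughly (970.8, 1000.0)
```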

If this is really the best obtainable result, then what is a Bayesian to do with it when making their decision? Do you pick a price at random from a specially crafted distribution, one that is 95% likely to produce a value between A95 and B95, and so on? Or is there a more “Bayesian” way?
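
For what it’s worth, the textbook Bayesian move is to integrate rather than randomize: put a prior on the unknown heads probability and quote the posterior predictive expected payout as a single price. But that only relocates the problem, since the price depends on the prior you chose. A sketch under an assumed Beta(a, b) prior (a = b = 1 reproduces the uniform case above):

```python
def bayes_price(heads: int, tails: int, a: float = 1.0, b: float = 1.0,
                pay_heads: int = 1000, pay_tails: int = 10) -> float:
    # Posterior predictive expected payout under a Beta(a, b) prior: the
    # posterior after the data is Beta(a + heads, b + tails).
    p = (a + heads) / (a + b + heads + tails)
    return p * pay_heads + (1 - p) * pay_tails

print(bayes_price(100, 0))                # uniform prior:  ~990.29
print(bayes_price(100, 0, a=0.5, b=0.5))  # Jeffreys prior: ~995.10
```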