First of all, the formula for r_i in the decaying-exponential case is wrong.
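For reference, here’s the allocation I get instead, assuming I’ve reconstructed the setup correctly (utility $1-e^{-r_i}$ conditional on class $C_i$, Zipfian prior $p_i \propto i^{-\alpha}$ over $n$ classes): maximizing $\sum_i p_i\,(1-e^{-r_i})$ subject to $\sum_i r_i = r$ with a Lagrange multiplier gives

$$ r_i \;=\; \frac{r}{n} + \log p_i - \frac{1}{n}\sum_{j=1}^{n}\log p_j \;=\; \frac{r}{n} - \alpha\log i + \frac{\alpha}{n}\sum_{j=1}^{n}\log j, $$

ignoring the $r_i \ge 0$ constraints, which only bind when $r$ is small. The calculations further down all follow from this.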
> Thus, the resources will be evenly distributed among all the classes as r increases. This is bad, because the resource fraction for the central class C1 goes to 0 as we increase the number of classes.
I don’t think this makes sense, for much the same reasons as given by skeptical_lurker.
You only get the even-distribution conclusion by (something like) fixing the number of classes as you let the total resources go to infinity. (Otherwise, the terms involving log(i) can make a large contribution.) But in that situation, your utility goes exponentially fast towards its upper bound of 1 and it’s hard to see how that can be viewed as a bad outcome.
You might say it’s a suboptimal outcome even though it’s a good one, but to make that claim it seems to me you have to do an actual expected-utility calculation. And we know what that expected-utility calculation says: it says that the resource allocation you’re objecting to is, in fact, the optimal one.
Or you might say it’s a suboptimal outcome because you just know that this allocation is bad, or something. Which amounts to saying that actually you know what the utility function should be and it isn’t the one the analysis assumes.
I have some sympathy with that last option. A utility function that not only is bounded but converges exponentially fast towards its bound feels pretty counterintuitive. It’s not a big surprise, surely, if such a counterintuitive choice of utility function yields wrong-looking resource allocations?
If both n and r get large, under what circumstances is it still true that the resource allocation is approximately uniform? I suppose that depends on how you define “approximately uniform”, but let’s try looking at the ratio of $r_1$ to $r/n$. If my scribbling is correct, this equals

$$ \frac{r_1}{r/n} \;=\; 1 + \alpha\,\frac{\sum_i \log i}{r}. $$

When n is large this is (very crudely) of order $\alpha\,(n/r)\log n$. So for any reasonable definition of “approximately uniform” this requires that r grow at least proportionally to n; e.g., just getting the ratio down to about $\log n$ already requires $r \ge \alpha n$.

And the expected utility is, if some more of my scribbling is correct, $1 - e^{-r/n}\,G/A$, where $G$ and $A$ are the geometric and arithmetic means of the $i^{-\alpha}$. Since $G \le A$, the expected utility is at least $1 - e^{-r/n}$ unconditionally (so at least $1 - e^{-\alpha}$ if $r \ge \alpha n$); and if $\alpha$ is much bigger than 0 (which is what it would take for a near-uniform allocation to look at all unreasonable), the expected utility is bigger still, because $G/A$ is then smaller.
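In case it isn’t obvious where that expected-utility expression comes from (still under the same reconstructed assumptions): plugging the allocation above back in makes every term $p_i e^{-r_i}$ equal to the same constant, so

$$ \mathbb{E}[U] \;=\; 1 - \sum_{i} p_i\,e^{-r_i} \;=\; 1 - n\,e^{-r/n}\Big(\prod_{j} p_j\Big)^{1/n} \;=\; 1 - e^{-r/n}\,\frac{G}{A}, $$

where the last step uses $p_i = i^{-\alpha}/(nA)$, and $G \le A$ gives the unconditional bound.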
Let’s take a very concrete example: the Zipfian $\alpha = 1$ with $n = 100$. Then our probabilities for the classes are roughly 19%, 10%, 6%, 5%, …, 0.2%. If we take $r = \alpha n = 100$, the resource allocation is 4.64, 3.94, 3.54, …, 0.03. Perhaps that’s “approximately uniform”, but it certainly doesn’t look shockingly so. The expected utilities conditional on the various classes are 0.99, 0.98, 0.97, …, 0.032, and the overall expected utility is 0.81.
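In case anyone wants to check the arithmetic, here’s a quick script that reproduces those numbers (a minimal sketch; the variable names are mine, and it assumes the utility/prior setup as I’ve reconstructed it above):

```python
import numpy as np

# My own check of the alpha = 1, n = 100 example above, assuming utility
# 1 - exp(-r_i) given class C_i and a Zipfian prior p_i ∝ i^(-alpha).
alpha, n, r = 1.0, 100, 100.0

i = np.arange(1, n + 1)
q = i ** -alpha                       # unnormalised Zipf weights
p = q / q.sum()                       # class probabilities: ~0.19, 0.10, 0.06, ...

# Allocation from the Lagrangian condition (the r_i >= 0 constraints don't bind here):
r_i = r / n + np.log(p) - np.log(p).mean()
print(r_i[[0, 1, 2, -1]])             # ~[4.64, 3.94, 3.54, 0.03]

cond_u = 1 - np.exp(-r_i)             # expected utility conditional on each class
print(cond_u[[0, 1, 2, -1]])          # ~[0.99, 0.98, 0.97, 0.032]

# Overall expected utility, both directly and via the closed form 1 - exp(-r/n) * G/A:
G, A = np.exp(np.log(q).mean()), q.mean()
print((p * cond_u).sum(), 1 - np.exp(-r / n) * G / A)   # both ~0.81
```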
That doesn’t seem to me like an unreasonable or counterintuitive outcome, all things considered.