Distributed public goods provision


Most people benefit significantly from privately funded public goods (e.g. Wikipedia).

If we all contribute to such public goods, then we can all end up better off. But as an individual it’s almost never a good return on investment. I think of supporting such public goods as being a good citizen, but that leaves open the question: what is a good amount to contribute? I can make most decisions by balancing costs and benefits, but that kind of analysis basically never leads to making small contributions to public goods in the name of being a “good citizen.”

This post doesn’t aim to answer that full question, but it sets up one simple formal model for the situation and a very natural “wish list” for public goods funding norms. It then proposes the following norm that meets this wish list:

My contribution to public good X should be 100 times larger than the amount I’d personally benefit if public good X received 1% more funding.

For example, suppose I’m considering how much to contribute to Wikipedia. Maybe I get $1000/year of value from Wikipedia, and I’d get $1000.10 of value if Wikipedia had 1% more funding. This rule says I should contribute $10/year (= 10 cents x 100).
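(To spell out the arithmetic, here’s the same calculation as a tiny Python snippet; the dollar figures are the made-up ones from the example above.)

```python
current_value = 1000.00         # $/year of value I get from Wikipedia today
value_with_1pct_more = 1000.10  # $/year if Wikipedia had 1% more funding

benefit_of_1pct = value_with_1pct_more - current_value  # $0.10
contribution = 100 * benefit_of_1pct                    # the norm: 100x that benefit

print(f"suggested contribution: ${contribution:.2f}/year")  # $10.00/year
```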

(In practice I think that labor is often a more important contribution than money, but a similar principle can apply. I’d also prefer that most donors to public goods use something like a donor lottery so that they can make a more informed decision about a larger amount of money when they do decide to give. This post is trying to bite off a tiny bit of the conceptual problem.)

Formal model

Assume there are n people and k projects. Person i makes a contribution x_ij to public good j. Let y_j = sum_i x_ij be the total funding for public good j.

Each person has a utility function U_i(y_1, …, y_k) expressing how much they benefit if the public goods are funded at levels y_1, …, y_k. The total utility for person i is U_i(y_1, …, y_k) – sum_j x_ij.

For simplicity I’ll assume that everyone’s utility is monotone in public goods: I’m never unhappy when one of the projects receives more funding.

I’ll also assume that the utility functions U_i are concave: increases in funding only ever decrease the marginal value of further funding. Concavity is often realistic, and it makes it much easier to prove theorems or do computations (maximizing a concave objective is a convex optimization problem), so it seems like a good starting point. Relaxing it seems important, but it’s not something I’m going to get into here.

I’m also assuming that everyone has linear utility in money, which is only realistic if the amounts involved are very small. When utility is linear it is natural to measure utility in $ and compare it freely across people. Relaxing this assumption (along with the associated normative assumptions) is a whole can of worms that’s also important, but I’m not going to get into it here.

In this setting there is a single “optimal” funding level for the public goods, namely the allocation that maximizes the concave aggregate utility function U_1(y_1, …, y_k) + … + U_n(y_1, …, y_k) – (y_1 + … + y_k). This leaves unspecified who provides the funding.
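To make the model concrete, here’s a minimal numerical sketch. The utility form U_i(y) = sum_j a_ij * sqrt(y_j) is a toy assumption of my own (monotone and concave, but otherwise arbitrary); the code just maximizes the aggregate objective above.

```python
import numpy as np
from scipy.optimize import minimize

# Toy model: n = 3 people, k = 2 public goods.
# Assumed utility form (my choice, purely illustrative):
#   U_i(y) = sum_j a[i, j] * sqrt(y_j),  which is monotone and concave.
a = np.array([[4.0, 1.0],
              [2.0, 3.0],
              [1.0, 5.0]])

def welfare(y):
    """Aggregate utility minus total funding: sum_i U_i(y) - sum_j y_j."""
    return np.sum(a * np.sqrt(y)) - np.sum(y)

# Maximize welfare (minimize its negative) subject to y_j >= 0.
res = minimize(lambda y: -welfare(y), x0=np.ones(2), bounds=[(1e-9, None)] * 2)

# For sqrt utilities the optimum has a closed form:
# d(welfare)/dy_j = (sum_i a[i, j]) / (2 sqrt(y_j)) - 1 = 0  =>  y_j = (A_j / 2)^2
print("numeric optimum:    ", res.x)                      # ~ [12.25 20.25]
print("closed-form optimum:", (a.sum(axis=0) / 2) ** 2)   # [12.25 20.25]
```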

Wishlist

By “norm” I mean a rule that individuals can use for deciding how much to fund each public good.

Here are two plausible desiderata for a norm:

  • If everyone always follows the norm, then we end up with the optimal levels of funding for the public goods.

  • If you start with a community that follows the norm and then add a bunch of new people who behave manipulatively, the newcomers can never make the original community worse off.

These are not the conventional desiderata in mechanism design—they say nothing at all about incentives.

All these desiderata say is that if a bunch of people choose to follow this norm, then they will do relatively well (even if many people try to manipulate their generosity). You can probably do even better by defecting. Following the norm in this post would be a kind of act of generosity. It’s also not verifiable so it can’t be enforced in any straightforward way.

Overall it’s not at all clear that you want a norm with these properties (since they can be in tension with other desirable properties like incentive compatibility), but I do think there’s a plausible case to use such norms as a kind of “cooperative default.”

Participating in this kind of cooperative norm involves being generous along an axis orthogonal to utilitarianism. Ten utilitarians with big enough irreconcilable differences would find themselves in exactly the same bind as ten selfish people unwilling to fund public goods that would benefit the group.

The norm

I’ll first describe a centralized implementation of funding decisions since it’s easier to analyze. In the next section I’ll describe how to decentralize it.

Everyone publishes their utility function U_i. Choose the socially optimal funding levels y_j given those utility functions. Person i contributes x_ij = y_j * dU_i/dy_j to public good j. Since the benefit to person i of 1% more funding for good j is approximately 0.01 * y_j * dU_i/dy_j, this is 100 times that benefit, and in light of the following lemma it is equivalent to the norm quoted at the top of the post:

Lemma [efficiency]: This norm balances the budget, i.e. y_j is in fact equal to sum_i x_ij.

Proof: at the optimal funding levels, sum_i dU_i/dy_j must equal 1 for every j, since otherwise we could improve total utility by either increasing or decreasing the level of funding. So sum_i x_ij = sum_i y_j * dU_i/dy_j = y_j * sum_i dU_i/dy_j = y_j * 1 = y_j.
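Here’s a numerical check of the efficiency lemma in the same toy sqrt-utility model (again, the functional form is just an illustrative assumption):

```python
import numpy as np

# Same toy model as above: U_i(y) = sum_j a[i, j] * sqrt(y_j).
a = np.array([[4.0, 1.0],
              [2.0, 3.0],
              [1.0, 5.0]])
y = (a.sum(axis=0) / 2) ** 2         # closed-form optimal funding levels

grads = a / (2 * np.sqrt(y))         # dU_i/dy_j evaluated at the optimum
x = y * grads                        # the norm: x_ij = y_j * dU_i/dy_j

print("contributions x_ij:\n", x)
print("sum_i x_ij:", x.sum(axis=0))  # equals y: the budget balances
print("y:         ", y)
```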

Lemma [robustness]: Adding new people never harms an honest community that follows this norm.

Proof: let x_ij and y_j be the contributions and funding levels before adding new people, and x_ij*, y_j* be the corresponding levels after adding them.

Adding new dishonest people doesn’t change the utility functions of any of the honest people; it just adds new (monotone) terms U_i to the sum that we are maximizing. This weakly increases the marginal value of every public good, which weakly increases the level of funding y_j* for each public good, which in turn (by concavity) weakly decreases every derivative dU_i/dy_j. I’ll write dU_i/dy_j* for the derivative of U_i evaluated at y* and dU_i/dy_j for the derivative evaluated at y.

We’ll show that the increase in utility U_i(y*) – U_i(y) is always at least as large as the increase in person i’s payment:

  • By concavity, U_i(y*) – U_i(y) is at least sum_j (y_j* – y_j) * dU_i/dy_j*

  • Since dU_i/dy_j* <= dU_i/dy_j and y_j >= 0, this is at least sum_j [ (y_j* * dU_i/dy_j*) – (y_j * dU_i/dy_j) ]

  • By definition, this is equal to sum_j (x_ij* – x_ij), i.e. the total amount by which person i’s payment increased.
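And here’s a numerical spot check of the robustness lemma in the same toy model. Note that a manipulative newcomer can only report some monotone concave utility function, which in this toy family means just another row of coefficients:

```python
import numpy as np

# Same toy model: U_i(y) = sum_j a[i, j] * sqrt(y_j) for the honest people.
a = np.array([[4.0, 1.0],
              [2.0, 3.0],
              [1.0, 5.0]])

def honest_net_utility(a_honest, a_all):
    """Each honest person's utility minus their payment under the norm."""
    y = (a_all.sum(axis=0) / 2) ** 2        # optimal funding for this population
    x = y * (a_honest / (2 * np.sqrt(y)))   # x_ij = y_j * dU_i/dy_j
    return (a_honest * np.sqrt(y)).sum(axis=1) - x.sum(axis=1)

before = honest_net_utility(a, a)

# A newcomer can only (mis)report some monotone concave utility,
# which in this family is just another row of coefficients.
newcomer = np.array([[10.0, 0.5]])
after = honest_net_utility(a, np.vstack([a, newcomer]))

print("net utility before:", before)
print("net utility after: ", after)
print("no one worse off:  ", bool(np.all(after >= before - 1e-9)))
```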

Relaxing assumptions

Centralization. To implement this norm we need to find the unique allocation that satisfies x_ij = y_j * dU_i/dy_j. We could do this with an iterative process where everyone repeatedly adjusts their contributions x_ij until we reach equilibrium, essentially implementing a distributed convex optimization algorithm. We don’t actually need anyone to ever write down their utility function, or any communication beyond computing the combined funding y_j = sum_i x_ij for each public good.
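Here’s a sketch of that iterative process in the same toy model: each round, each person observes only the totals y_j and resets their contribution to y_j * dU_i/dy_j. Under these toy utilities the iteration converges to the closed-form optimum:

```python
import numpy as np

# Decentralized version in the same toy model. Each round, each person sees
# only the totals y_j = sum_i x_ij and resets x_ij to y_j * dU_i/dy_j.
a = np.array([[4.0, 1.0],
              [2.0, 3.0],
              [1.0, 5.0]])

x = np.ones_like(a)                 # arbitrary positive starting contributions
for _ in range(100):
    y = x.sum(axis=0)               # the only shared information
    x = y * (a / (2 * np.sqrt(y)))  # everyone adjusts independently

print("equilibrium y:", x.sum(axis=0))             # ~ [12.25 20.25]
print("closed form:  ", (a.sum(axis=0) / 2) ** 2)  # [12.25 20.25]
```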

Linear utility. In reality the marginal value of money increases as I spend more of it. I think the basic approach in this post still works, and the iterative algorithm is basically unchanged as long as you convert between $ and utility using your current marginal utility of money at each step. The guarantee becomes more complex because it’s no longer so simple to talk about a social optimum. The best we can say is that we’ll get something Pareto efficient (even including the possibility of transfers).

Concavity. If utility is non-concave then an iterative approach won’t work, and you certainly need some kind of coordination to get to an efficient allocation (you also need to solve a potentially intractable optimization problem). I believe the norm in this post may also become vulnerable to manipulation, but I haven’t checked for an example. It’s plausible to me that my two desiderata are unachievable when utility is non-concave. (They are obviously unachievable when utility is non-monotone.)

Uniqueness? It seems like everything had to work out surprisingly nicely for this norm to achieve the two desiderata. I’m not sure whether there are many other norms that would work, or whether this one is essentially unique.