Karma: 2,385
• If a job requires in-person customer/client contact or has a conservative dress code, long hair is a negative for men. I can’t think of a job where long hair might be a plus aside from music, arts, or modeling. It’s probably neutral for Bay Area programmers, assuming it’s well maintained. If you’re inclined towards long hair because it seems low effort, it’s easy to buy clippers and keep it cut to a uniform short length yourself.

Beards are mostly neutral—even where long hair would be negative—again assuming they are well maintained. At a minimum, trim it every few weeks and shave your neck regularly.

• From the Even Odds thread:

Assume there are n people. Let S_i be person i’s score for the event that occurs according to your favorite proper scoring rule. Then let the total payment to person i be

T_i = S_i − (1/(n−1)) · Σ_{j≠i} S_j

(i.e. the person’s score minus the average score of everyone else). If there are two people, this is just the difference in scores. The person makes a profit if T_i is positive and a payment if T_i is negative.

This scheme is always strategyproof and budget-balanced. If the Bregman divergence associated with the scoring rule is symmetric (like it is with the quadratic scoring rule), then each person expects the same profit before the question is resolved.
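As a concrete sketch of the scheme (using the quadratic scoring rule and made-up probabilities, neither from the thread), the payments can be computed and the budget-balance property checked directly:

```python
def quadratic_score(p, outcome):
    """Proper quadratic (Brier-style) score for probability p of a binary event."""
    return 1 - (p - outcome) ** 2

def payments(probs, outcome):
    """T_i = S_i minus the average score of everyone else."""
    scores = [quadratic_score(p, outcome) for p in probs]
    n = len(scores)
    return [s - (sum(scores) - s) / (n - 1) for s in scores]

probs = [0.8, 0.5, 0.3]       # three people's stated probabilities (illustrative)
t = payments(probs, outcome=1)
print(t)                       # best forecaster profits, worst pays
print(sum(t))                  # budget balance: payments sum to zero
```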

• Not aware of any tourneys with this tweak, but I use a similar example when I teach.

If the payoff from exiting is zero and the mutual defection payoff is negative, then the game doesn’t change much. Exit on the first round becomes the unique subgame-perfect equilibrium of any finite repetition, and with a random end date, trigger strategies to support cooperation work similarly to the original game.

Life is more interesting if the mutual defection payoff is sufficiently better than exit. Cooperation can happen in equilibrium even when the end date is known (except on the last round), since exit is a viable threat to punish defection.
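That threat logic can be checked numerically. The payoffs below are illustrative (not from any particular course example): c is mutual cooperation, d the temptation payoff, m mutual defection, and exit pays 0 per round thereafter.

```python
def deviation_gain(t, T, c, d, m):
    """Gain from defecting at round t (0-indexed) instead of conforming to:
    cooperate until the last round, defect in the last round, and exit
    forever (payoff 0 per round) if anyone has ever deviated."""
    if t == T - 1:
        return 0.0                      # conforming already defects in the last round
    conform = c * (T - 1 - t) + m       # cooperate until the end, then m
    deviate = d                         # one temptation payoff, then exit -> 0
    return deviate - conform

c, d, m, T = 3.0, 5.0, 2.5, 10          # d > c > m > 0 (exit)
profitable = [t for t in range(T) if deviation_gain(t, T, c, d, m) > 0]
print(profitable)  # []: no round offers a profitable one-shot deviation
```

With m = 0 (mutual defection no better than exit), the same check shows a profitable deviation on the second-to-last round, matching the unraveling argument above.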

• From an economics perspective, the stapler dissertation is real. The majority of the time, the three papers haven’t been published.

It’s also possible to publish empirical work produced in a few months. The issue is where that article is likely to be published. There’s a clear hierarchy of journals, and a low-ranked publication could hurt more than it helps. Dissertation committees have very different standards depending on the student’s ambition to go into academia. If the committee has to write letters of rec to other professors, it takes a lot more work to be sufficiently novel and interesting. If someone goes into industry, almost any three papers will suffice.

I’ve seen people leave because they couldn’t pass coursework or because they felt burnt out, but the degree almost always comes conditional on writing something and having well-calibrated ambitions.

• Results like the Second Welfare Theorem (every efficient allocation can be implemented via competitive equilibrium after some lump-sum transfers) suggest it must be equivalent in theory.

Eric Budish has done some interesting work changing the course allocation system at Wharton to use general equilibrium theory behind the scenes. In the previous system, courses were allocated via a fake money auction where students had to actually make bids. In the new system, students submit preferences and the allocation is computed as the equilibrium starting from “equal incomes”.

What benefits do you think a different system might provide, or what problems does monetary exchange have that you’re trying to avoid? Extra computation and connectivity should just open opportunities for new markets and dynamic pricing, rather than suggest we need something new.

• 10 Dec 2014 19:35 UTC
11 points

My intuition is every good allocation system will use prices somewhere, whether the users see them or not. The main perk of the story’s economy is getting things you need without having to explicitly decide to buy them (e.g. the down-on-his-luck guy unexpectedly gifted his favorite coffee), and that could be implemented through individual AI agents rather than a central AI.

Fleshing out how this might play out, if I’m feeling sick, my AI agent notices and broadcasts a bid for hot soup. The agents of people nearby respond with offers. The lowest offer might come from someone already in a soup shop who lives next door to me since they’ll hardly have to go out of their way. Their agent would notify them to buy something extra and deliver it to me. Once the task is fulfilled, my agent would send the agreed-upon payment. As long as the agents are well-calibrated to our needs and costs, it’d feel like a great gift even if there are auctions and payments behind the scenes.

For pointers, general equilibrium theory studies how to allocate all the goods in an economy. Depending on how you squint at the model, it could be studying centralized or decentralized markets based on money or pure exchange. A Toolbox for Economic Design is a fairly accessible textbook on mechanism design that covers many allocation topics.

• I’m on board with “absurdly powerful”. It underlies the bulk of mechanism design, to the point my advisor complains we’ve confused it with the entirety of mechanism design.

The principle gives us the entire set of possible outcomes for some solution concept like dominant-strategy equilibrium or Bayes-Nash equilibrium. It works for any search over the set of outcomes, whether that leads to an impossibility result or a constructive result like identifying the revenue-optimal auction.

Given an arbitrary mechanism, it’s easy (in principle) to find the associated IC direct mechanism(s). The mechanism defines a game, so we solve the game and find the equilibrium outcomes for each type profile. Once we’ve found that, the IC direct mechanism just assigns the equilibrium outcome directly. For instance, if everyone’s equilibrium strategy in a pay-your-bid/​first-price auction was to bid 90% of their value, the direct mechanism assigns the item to the person with the highest value and charges them 90% of their value. Since a game can have multiple equilibria, we have one IC mechanism per outcome. The revelation principle can’t answer questions like “Is there a mechanism where every equilibrium (as opposed to some equilibrium) gives a particular outcome?”
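The first-price construction can be sketched in a few lines. The 90% shading factor is the hypothetical from the paragraph above, not a derived equilibrium, and the values are made up:

```python
def first_price_outcome(bids):
    """Pay-your-bid auction: highest bid wins and pays their own bid."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return winner, bids[winner]

def direct_mechanism(values, shade=0.9):
    """Associated IC direct mechanism: report values, win with the highest
    value, pay the (assumed) equilibrium bid of 90% of that value."""
    winner = max(range(len(values)), key=lambda i: values[i])
    return winner, shade * values[winner]

values = [10.0, 7.0, 4.0]
equilibrium_bids = [0.9 * v for v in values]   # assumed equilibrium strategy
print(first_price_outcome(equilibrium_bids) == direct_mechanism(values))  # True
```

The direct mechanism simply bakes the equilibrium strategy into the payment rule, so truthful reporting reproduces the original equilibrium outcome.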

• The paper cited is handwavy and conversational because it isn’t making original claims. It’s providing a survey for non-specialists. The table I mentioned is a summary of six other papers.

Some of the studies assume workers in poorer countries are permanently 1/3rd or 1/5th as productive as native workers, so the estimate is based on something more like: a person transferred from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy is able to produce $10–15K in value.

• For context on the size of the potential benefit, an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars). The main question is the rate of migration if barriers are partially lowered, with estimates varying between 1% and 30%. Completely open migration could double world output. These figures are based on Table 2 of Clemens (2011).

• The issue is when we should tilt outcomes in favor of higher credence theories. Starting from a credence-weighted mixture, I agree theories should have equal bargaining power. Starting from a more neutral disagreement point, like the status quo actions of a typical person, higher credence should entail more power /​ votes /​ delegates.

As a quick example, equal bargaining from a credence-weighted mixture tends to favor the lower credence theory compared to weighted bargaining from an equal status quo. If the total feasible set of utilities is {(x,y) | x^2 + y^2 ≤ 1; x,y ≥ 0}, then the NBS starting from (0.9, 0.1) is about (0.95, 0.28), and the NBS starting from (0,0) with theory 1 having nine delegates (i.e. an exponent of nine in the Nash product) and theory 2 having one delegate is (0.98, 0.16).

If the credence-weighted mixture were on the Pareto frontier, both approaches are equivalent.

• For the NBS with more than two agents, you just maximize the product of everyone’s gains in utility over the disagreement point. For Kalai–Smorodinsky, you continue to equate the ratios of gains, i.e. pick the point on the Pareto frontier on the line between the disagreement point and the vector of ideal utilities.

Agents could be given more bargaining power by giving them different exponents in the Nash product.
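A small sketch of exponents as bargaining power, on a toy feasible set x + y ≤ 1 with disagreement point (0, 0) (my example, chosen so the answer is known in closed form: weights (a, b) give shares a/(a+b) and b/(a+b)):

```python
def nash_product(x, a, b):
    """Weighted Nash product on the Pareto frontier of {x + y <= 1}."""
    y = 1 - x
    return (x ** a) * (y ** b)

def argmax_on_frontier(a, b, steps=100000):
    """Grid-search the frontier for the weighted NBS."""
    xs = [i / steps for i in range(1, steps)]
    return max(xs, key=lambda x: nash_product(x, a, b))

print(round(argmax_on_frontier(1, 1), 3))  # equal weights: 0.5 each
print(round(argmax_on_frontier(2, 1), 3))  # doubled weight: 2/3 share
```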

• Alright, a credence-weighted randomization between ideals and then bargaining on equal footing from there makes sense. I was imagining the parliament starting from scratch.

Another alternative would be to use a hypothetical disagreement point corresponding to the worst utility for each theory, while giving higher credence theories more bargaining power. Or a disagreement point based on a typical person’s life (the outcome can’t be worse for any theory than a policy of being kind to your family, giving to socially-motivated causes, cheating on your taxes a little, telling white lies, and not murdering).

• I agree that some cardinal information needs to enter in the model to generate compromise. The question is whether we can map all theories onto the same utility scale or whether each agent gets their own scale. If we put everything on the same scale, it looks like we’re doing meta-utilitarianism. If each agent gets their own scale, compromise still makes sense without meta-value judgments.

Two outcomes is too degenerate a case if agents get their own scales, so suppose A, B, and C are the options, theory 1 has ordinal preferences B > C > A, and theory 2 has preferences A > C > B. Depending on how much of a compromise C is for each agent, the outcome could vary between

• choosing C (say if C is 99% as good as the ideal for each agent),

• a 50–50 lottery over A and B (if C is only 1% better than the worst for each), or

• some other lottery (for instance, theory 1 thinks C achieves 90% of B and theory 2 thinks C achieves 40% of A; then a lottery with weight 2/3 on C and 1/3 on A gives each of them 60% of the gain between their best and worst)
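The arithmetic in the last bullet can be verified with each theory’s utilities normalized so its worst option is 0 and its best is 1:

```python
u1 = {"A": 0.0, "B": 1.0, "C": 0.9}   # theory 1: C achieves 90% of B
u2 = {"A": 1.0, "B": 0.0, "C": 0.4}   # theory 2: C achieves 40% of A
lottery = {"C": 2 / 3, "A": 1 / 3}    # weight 2/3 on C, 1/3 on A

eu1 = sum(p * u1[o] for o, p in lottery.items())
eu2 = sum(p * u2[o] for o, p in lottery.items())
print(eu1, eu2)  # both ~0.6: each theory gets 60% of its best-to-worst gain
```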

• My reading of the problem is that a satisfactory Parliamentary Model should:

• Represent moral theories as delegates with preferences over adopted policies.

• Allow delegates to stand up for their theories and bargain over the final outcome, extracting concessions on vital points while letting other policies slide.

• Restrict delegates’ use of dirty tricks or deceit.

Since bargaining in good faith appears to be the core feature, my mind immediately goes to models of bargaining under complete information rather than voting. What are the pros and cons of starting with the Nash bargaining solution as implemented by an alternating offer game?

The two obvious issues are how to translate delegates’ preferences into utilities and what the disagreement point is. Assuming a utility function is fairly mild if the delegate has preferences over lotteries. Plus, there’s no utility-comparison problem even though you need cardinal utilities. The lack of a natural disagreement point is trickier. What intuitions might be lost going this route?
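For reference, in Rubinstein’s alternating-offer game over a unit pie with common discount factor δ, the proposer’s equilibrium share is 1/(1 + δ), which converges to the 50/50 equal-weight NBS split as δ → 1. A minimal sketch:

```python
def rubinstein_shares(delta):
    """Equilibrium split of a unit pie in Rubinstein alternating offers
    with a common discount factor delta; proposer gets 1/(1 + delta)."""
    proposer = 1 / (1 + delta)
    return proposer, 1 - proposer

for delta in (0.5, 0.9, 0.99):
    print(delta, rubinstein_shares(delta))  # shares approach (0.5, 0.5)
```

The proposer advantage (and hence the deviation from the NBS) shrinks as delays become cheap, which is one way the alternating-offer game implements the NBS in the limit.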

• It turns out the only Pareto efficient, individually rational (i.e. no one ever gets a job worse than their initial one), and strategyproof mechanism is Top Trading Cycles. In order to make Cato better off, we’d have to violate one of those properties in some way.
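A compact sketch of Top Trading Cycles for this job-reassignment setting; the names and preferences are illustrative (in this instance Ann and Bob trade while Cato is stuck with his endowment):

```python
def top_trading_cycles(endowment, prefs):
    """endowment: person -> owned job; prefs: person -> jobs, best first."""
    owner = {job: person for person, job in endowment.items()}
    assignment = {}
    remaining = set(endowment)
    while remaining:
        # each remaining person points at the owner of their best remaining job
        points_to = {
            p: owner[next(j for j in prefs[p] if owner.get(j) in remaining)]
            for p in remaining
        }
        # walk the pointers until a person repeats; the tail is a cycle
        node, seen = next(iter(remaining)), []
        while node not in seen:
            seen.append(node)
            node = points_to[node]
        cycle = seen[seen.index(node):]
        # everyone in the cycle gets the job they pointed at, then leaves
        for p in cycle:
            assignment[p] = endowment[points_to[p]]
        remaining -= set(cycle)
    return assignment

endowment = {"Ann": "j1", "Bob": "j2", "Cato": "j3"}
prefs = {"Ann": ["j2", "j1", "j3"],
         "Bob": ["j1", "j2", "j3"],
         "Cato": ["j1", "j2", "j3"]}
print(top_trading_cycles(endowment, prefs))
# Ann and Bob trade; Cato keeps j3 even though he prefers j1 and j2
```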

# Strategyproof Mechanisms: Possibilities

2 Jun 2014 2:26 UTC
42 points