Karma: 4,322
• If there is a net pos­i­tive ex­ter­nal­ity, then even large pri­vate benefits aren’t enough. That’s the whole point of the ex­ter­nal­ity con­cept.

• If a job requires in-person customer/client contact or has a conservative dress code, long hair is a negative for men. I can't think of a job where long hair might be a plus aside from music, arts, or modeling. It's probably neutral for Bay Area programmers assuming it's well maintained. If you're inclined towards long hair since it seems low effort, it's easy to buy clippers and keep it cut to a uniform short length yourself.

Beards are mostly neu­tral—even where long hair would be nega­tive—again as­sum­ing they are well main­tained. At a min­i­mum, trim it ev­ery few weeks and shave your neck reg­u­larly.

• From the Even Odds thread:

As­sume there are n peo­ple. Let S_i be per­son i’s score for the event that oc­curs ac­cord­ing to your fa­vorite proper scor­ing rule. Then let the to­tal pay­ment to per­son i be

$T_i = S_i - \frac{1}{n-1}\sum_{j \neq i} S_j$

(i.e. the per­son’s score minus the av­er­age score of ev­ery­one else). If there are two peo­ple, this is just the differ­ence in scores. The per­son makes a profit if T_i is pos­i­tive and a pay­ment if T_i is nega­tive.

This scheme is always strat­e­gyproof and bud­get-bal­anced. If the Breg­man di­ver­gence as­so­ci­ated with the scor­ing rule is sym­met­ric (like it is with the quadratic scor­ing rule), then each per­son ex­pects the same profit be­fore the ques­tion is re­solved.
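A minimal sketch of the scheme in code (using the quadratic/Brier scoring rule for a binary event; the forecasts are made up for illustration):

```python
def quadratic_score(forecast, outcome):
    """Quadratic (Brier-style) score for a probability `forecast`
    that a binary event occurs; higher is better."""
    p = forecast if outcome else 1 - forecast
    return 2 * p - (forecast ** 2 + (1 - forecast) ** 2)

def payments(scores):
    """T_i = S_i minus the average score of everyone else."""
    n = len(scores)
    total = sum(scores)
    return [s - (total - s) / (n - 1) for s in scores]

forecasts = [0.8, 0.6, 0.3]   # hypothetical probabilities for the event
scores = [quadratic_score(f, outcome=True) for f in forecasts]
t = payments(scores)
assert abs(sum(t)) < 1e-9     # budget balanced: payments net to zero
```

Budget balance follows because each person's score enters once positively and n−1 times with weight −1/(n−1) across the other payments, so everything cancels.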

• Not aware of any tour­neys with this tweak, but I use a similar ex­am­ple when I teach.

If the pay­off from ex­it­ing is zero and the mu­tual defec­tion pay­off is nega­tive, then the game doesn’t change much. Exit on the first round be­comes the unique sub­game-perfect equil­ibrium of any finite rep­e­ti­tion, and with a ran­dom end date, trig­ger strate­gies to sup­port co­op­er­a­tion work similarly to the origi­nal game.

Life is more interesting if the mutual defection payoff is sufficiently better than exit. Cooperation can happen in equilibrium even when the end date is known (except on the last round) since exit is a viable threat to punish defection.

• From an eco­nomics per­spec­tive, the sta­pler dis­ser­ta­tion is real. The ma­jor­ity of the time, the three pa­pers haven’t been pub­lished.

It’s also pos­si­ble to pub­lish em­piri­cal work pro­duced in a few months. The is­sue is where that ar­ti­cle is likely to be pub­lished. There’s a clear hi­er­ar­chy of jour­nals, and a low ranked pub­li­ca­tion could hurt more than it helps. Disser­ta­tion com­mit­tees have very differ­ent stan­dards de­pend­ing on the stu­dent’s am­bi­tion to go into academia. If the com­mit­tee has to write let­ters of rec to other pro­fes­sors, it takes a lot more work to be suffi­ciently novel and in­ter­est­ing. If some­one goes into in­dus­try, al­most any three pa­pers will suffice.

I’ve seen peo­ple leave be­cause they couldn’t pass course­work or be­cause they felt burnt out, but the de­gree al­most always comes con­di­tional on writ­ing some­thing and hav­ing well-cal­ibrated am­bi­tions.

• Results like the Second Welfare Theorem (every efficient allocation can be implemented via competitive equilibrium after some lump-sum transfers) suggest it must be equivalent in theory.

Eric Bud­ish has done some in­ter­est­ing work chang­ing the course al­lo­ca­tion sys­tem at Whar­ton to use gen­eral equil­ibrium the­ory be­hind the scenes. In the pre­vi­ous sys­tem, courses were al­lo­cated via a fake money auc­tion where stu­dents had to ac­tu­ally make bids. In the new sys­tem, stu­dents sub­mit prefer­ences and the al­lo­ca­tion is com­puted as the equil­ibrium start­ing from “equal in­comes”.

What benefits do you think a differ­ent sys­tem might provide, or what prob­lems does mon­e­tary ex­change have that you’re try­ing to avoid? Ex­tra com­pu­ta­tion and con­nec­tivity should just open op­por­tu­ni­ties for new mar­kets and dy­namic pric­ing, rather than sug­gest we need some­thing new.

• My in­tu­ition is ev­ery good al­lo­ca­tion sys­tem will use prices some­where, whether the users see them or not. The main perk of the story’s econ­omy is get­ting things you need with­out hav­ing to ex­plic­itly de­cide to buy them (ie the down-on-his-luck guy un­ex­pect­edly gifted his fa­vorite coffee), and that could be im­ple­mented through in­di­vi­d­ual AI agents rather than a cen­tral AI.

Flesh­ing out how this might play out, if I’m feel­ing sick, my AI agent no­tices and broad­casts a bid for hot soup. The agents of peo­ple nearby re­spond with offers. The low­est offer might come from some­one already in a soup shop who lives next door to me since they’ll hardly have to go out of their way. Their agent would no­tify them to buy some­thing ex­tra and de­liver it to me. Once the task is fulfilled, my agent would send the agreed-upon pay­ment. As long as the agents are well-cal­ibrated to our needs and costs, it’d feel like a great gift even if there are auc­tions and pay­ments be­hind the scenes.
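A hypothetical sketch of that agent-level mechanism (names and prices invented; the point is just that a reverse auction hides behind the gift):

```python
def run_reverse_auction(offers):
    """Pick the cheapest offer; `offers` maps each nearby person's
    agent to the price they'd need to fulfill the task."""
    seller = min(offers, key=offers.get)
    return seller, offers[seller]

# My agent broadcasts a bid for hot soup; nearby agents respond with
# asks reflecting how far out of their way the errand would be.
offers = {"neighbor_at_soup_shop": 4.50, "friend_across_town": 9.00}
seller, price = run_reverse_auction(offers)
assert seller == "neighbor_at_soup_shop"
```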

For pointers, general equilibrium theory studies how to allocate all the goods in an economy. Depending on how you squint at the model, it could be studying centralized or decentralized markets based on money or pure exchange. A Toolbox for Economic Design is a fairly accessible textbook on mechanism design that covers lots of allocation topics.

• I’m on board with “ab­surdly pow­er­ful”. It un­der­lies the bulk of mechanism de­sign, to the point my ad­vi­sor com­plains we’ve con­fused it with the en­tirety of mechanism de­sign.

The prin­ci­ple gives us the en­tire set of pos­si­ble out­comes for some solu­tion con­cept like dom­i­nant-strat­egy equil­ibrium or Bayes-Nash equil­ibrium. It works for any search over the set of out­comes, whether that leads to an im­pos­si­bil­ity re­sult or a con­struc­tive re­sult like iden­ti­fy­ing the rev­enue-op­ti­mal auc­tion.

Given an ar­bi­trary mechanism, it’s easy (in prin­ci­ple) to find the as­so­ci­ated IC di­rect mechanism(s). The mechanism defines a game, so we solve the game and find the equil­ibrium out­comes for each type pro­file. Once we’ve found that, the IC di­rect mechanism just as­signs the equil­ibrium out­come di­rectly. For in­stance, if ev­ery­one’s equil­ibrium strat­egy in a pay-your-bid/​first-price auc­tion was to bid 90% of their value, the di­rect mechanism as­signs the item to the per­son with the high­est value and charges them 90% of their value. Since a game can have mul­ti­ple equil­ibria, we have one IC mechanism per out­come. The rev­e­la­tion prin­ci­ple can’t an­swer ques­tions like “Is there a mechanism where ev­ery equil­ibrium (as op­posed to some equil­ibrium) gives a par­tic­u­lar out­come?”
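A toy sketch of that construction (the 90% shading is the figure from the example, not something derived here):

```python
def direct_mechanism(values, shade=0.9):
    """IC direct mechanism induced by the equilibrium 'bid 90% of your
    value' in a first-price auction: everyone reports a value, the item
    goes to the highest report, and the winner pays 90% of their report."""
    winner = max(range(len(values)), key=lambda i: values[i])
    return winner, shade * values[winner]

winner, price = direct_mechanism([10.0, 30.0, 20.0])
assert winner == 1 and abs(price - 27.0) < 1e-9
```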

• The pa­per cited is hand­wavy and con­ver­sa­tional be­cause it isn’t mak­ing origi­nal claims. It’s pro­vid­ing a sur­vey for non-spe­cial­ists. The table I men­tioned is a sum­mary of six other pa­pers.

Some of the studies assume workers in poorer countries are permanently 1/3rd or 1/5th as productive as native workers, so the estimate is based on something more like: a person transferred from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy is able to produce $10–15K in value.

• For context on the size of the potential benefit, an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars). The main question is the rate of migration if barriers are partially lowered, with estimates varying between 1% and 30%. Completely open migration could double world output. (Based on Table 2 of Clemens (2011).)

• The is­sue is when we should tilt out­comes in fa­vor of higher cre­dence the­o­ries. Start­ing from a cre­dence-weighted mix­ture, I agree the­o­ries should have equal bar­gain­ing power. Start­ing from a more neu­tral dis­agree­ment point, like the sta­tus quo ac­tions of a typ­i­cal per­son, higher cre­dence should en­tail more power /​ votes /​ del­e­gates.

On a quick example, equal bargaining from a credence-weighted mixture tends to favor the higher credence theory compared to weighted bargaining from an equal status quo. If the total feasible set of utilities is {(x,y) | x^2 + y^2 ≤ 1; x,y ≥ 0}, then the NBS starting from (0.9, 0.1) is about (0.96, 0.29), while the NBS starting from (0,0) with theory 1 having nine delegates (i.e. an exponent of nine in the Nash product) and theory 2 having one delegate is about (0.95, 0.32).

If the cre­dence-weighted mix­ture were on the Pareto fron­tier, both ap­proaches are equiv­a­lent.
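A quick numeric check of the two solutions on this feasible set (pure-Python grid search over the quarter circle):

```python
import math

def argmax_on_circle(objective, steps=200000):
    """Grid search over the frontier x^2 + y^2 = 1 with x, y > 0."""
    best, best_val = None, -math.inf
    for k in range(1, steps):
        t = (math.pi / 2) * k / steps
        x, y = math.cos(t), math.sin(t)
        v = objective(x, y)
        if v > best_val:
            best, best_val = (x, y), v
    return best

# Equal bargaining from the credence-weighted mixture d = (0.9, 0.1):
# maximize the Nash product (x - 0.9) * (y - 0.1).
nbs_mix = argmax_on_circle(lambda x, y: (x - 0.9) * (y - 0.1))

# Bargaining from (0, 0) with nine delegates for theory 1 and one for
# theory 2: maximize x^9 * y, i.e. 9*log(x) + log(y).
nbs_weighted = argmax_on_circle(lambda x, y: 9 * math.log(x) + math.log(y))

# nbs_mix ≈ (0.96, 0.29); nbs_weighted ≈ (0.95, 0.32)
```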

• For the NBS with more than two agents, you just maximize the product of everyone's gain in utility over the disagreement point. For Kalai-Smorodinsky, you continue to equate the ratios of gains, i.e. pick the point on the Pareto frontier on the line between the disagreement point and the vector of ideal utilities.

Agents could be given more bar­gain­ing power by giv­ing them differ­ent ex­po­nents in the Nash product.
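A minimal sketch of the Kalai-Smorodinsky computation on a toy problem (the quarter-circle frontier, disagreement point, and ideal utilities here are invented for illustration):

```python
def kalai_smorodinsky(disagreement, ideal, feasible):
    """K-S solution: the farthest feasible point on the segment from the
    disagreement point toward the vector of ideal utilities, so every
    agent realizes the same fraction of their maximal gain."""
    (dx, dy), (ix, iy) = disagreement, ideal
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisect on the fraction of the way to the ideal
        t = (lo + hi) / 2
        if feasible(dx + t * (ix - dx), dy + t * (iy - dy)):
            lo = t
        else:
            hi = t
    return dx + lo * (ix - dx), dy + lo * (iy - dy)

quarter_circle = lambda x, y: x * x + y * y <= 1
x, y = kalai_smorodinsky((0.0, 0.0), (1.0, 1.0), quarter_circle)
# equal fractional gains from (0,0) toward (1,1): (x, y) ≈ (0.707, 0.707)
```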

• Alright, a cre­dence-weighted ran­dom­iza­tion be­tween ideals and then bar­gain­ing on equal foot­ing from there makes sense. I was imag­in­ing the par­li­a­ment start­ing from scratch.

Another al­ter­na­tive would be to use a hy­po­thet­i­cal dis­agree­ment point cor­re­spond­ing to the worst util­ity for each the­ory and giv­ing higher cre­dence the­o­ries more bar­gain­ing power. Or more bar­gain­ing power from a typ­i­cal per­son’s life (the out­come can’t be worse for any the­ory than a policy of be­ing kind to your fam­ily, giv­ing to so­cially-mo­ti­vated causes, cheat­ing on your taxes a lit­tle, tel­ling white lies, and not mur­der­ing).

• I agree that some car­di­nal in­for­ma­tion needs to en­ter in the model to gen­er­ate com­pro­mise. The ques­tion is whether we can map all the­o­ries onto the same util­ity scale or whether each agent gets their own scale. If we put ev­ery­thing on the same scale, it looks like we’re do­ing meta-util­i­tar­i­anism. If each agent gets their own scale, com­pro­mise still makes sense with­out meta-value judg­ments.

Two outcomes is too degenerate a case if agents get their own scales, so suppose A, B, and C are the options, theory 1 has ordinal preferences B > C > A, and theory 2 has preferences A > C > B. Depending on how much of a compromise C is for each agent, the outcome could vary between

• choos­ing C (say if C is 99% as good as the ideal for each agent),

• a 50/50 lottery over A and B (if C is only 1% better than the worst for each), or

• some other lottery (for instance, theory 1 thinks C achieves 90% of B and theory 2 thinks C achieves 40% of A; then a lottery with weight 2/3 on C and 1/3 on A gives them each 60% of the gain between their best and worst)
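Checking the arithmetic in that last case (utilities normalized so each theory's worst option is 0 and its best is 1):

```python
def lottery_utility(weights, utilities):
    """Expected utility of a lottery over the options (A, B, C)."""
    return sum(w * u for w, u in zip(weights, utilities))

u1 = [0.0, 1.0, 0.9]   # theory 1: B best, A worst, C worth 90% of B
u2 = [1.0, 0.0, 0.4]   # theory 2: A best, B worst, C worth 40% of A
weights = [1 / 3, 0.0, 2 / 3]   # 1/3 on A, 2/3 on C

assert abs(lottery_utility(weights, u1) - 0.6) < 1e-9
assert abs(lottery_utility(weights, u2) - 0.6) < 1e-9   # 60% each
```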

• My read­ing of the prob­lem is that a satis­fac­tory Par­li­a­men­tary Model should:

• Rep­re­sent moral the­o­ries as del­e­gates with prefer­ences over adopted poli­cies.

• Allow delegates to stand up for their theories and bargain over the final outcome, extracting concessions on vital points while letting other policies slide.

• Restrict del­e­gates’ use of dirty tricks or de­ceit.

Since bar­gain­ing in good faith ap­pears to be the core fea­ture, my mind im­me­di­ately goes to mod­els of bar­gain­ing un­der com­plete in­for­ma­tion rather than vot­ing. What are the pros and cons of start­ing with the Nash bar­gain­ing solu­tion as im­ple­mented by an al­ter­nat­ing offer game?

The two obvious issues are how to translate delegates' preferences into utilities and what the disagreement point is. Assuming a utility function is fairly mild if the delegate has preferences over lotteries. Plus, there's no utility comparison problem even though you need cardinal utilities. The lack of a natural disagreement point is trickier. What intuitions might be lost going this route?
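For reference, under the standard assumptions (unit surplus, common discount factor, disagreement worth zero) the alternating-offer game's subgame-perfect split converges to the 50/50 NBS as players grow patient:

```python
def rubinstein_shares(delta):
    """Rubinstein alternating-offers split of a unit surplus when both
    players discount by `delta` per round: the proposer gets 1/(1+delta)."""
    proposer = 1 / (1 + delta)
    return proposer, 1 - proposer

# As delta -> 1 the first-mover advantage vanishes and the split
# approaches the Nash bargaining solution (1/2, 1/2).
shares = [rubinstein_shares(d) for d in (0.5, 0.9, 0.99)]
```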

• It turns out the only Pareto efficient, individually rational (i.e. no one ever gets something worse than their initial job), and strategyproof mechanism is Top Trading Cycles. In order to make Cato better off, we'd have to violate one of those in some way.
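A minimal sketch of Top Trading Cycles (agent and job names invented):

```python
def top_trading_cycles(endowment, prefs):
    """endowment: agent -> object (e.g. job) they start with.
    prefs: agent -> list of objects, most preferred first.
    Returns agent -> assigned object."""
    owner = {obj: agent for agent, obj in endowment.items()}
    remaining = set(endowment)
    assignment = {}
    while remaining:
        # Each remaining agent points at the owner of their favorite
        # object still on the market.
        favorite = {a: next(o for o in prefs[a] if owner[o] in remaining)
                    for a in remaining}
        points_to = {a: owner[favorite[a]] for a in remaining}
        # Follow pointers until an agent repeats: that's a trading cycle.
        path, a = [], next(iter(remaining))
        while a not in path:
            path.append(a)
            a = points_to[a]
        cycle = path[path.index(a):]
        for b in cycle:               # everyone in the cycle trades
            assignment[b] = favorite[b]
        remaining -= set(cycle)
    return assignment

endowment = {"ann": "job1", "bob": "job2", "cato": "job3"}
prefs = {"ann": ["job2", "job1", "job3"],
         "bob": ["job1", "job2", "job3"],
         "cato": ["job1", "job2", "job3"]}
# ann and bob swap; cato keeps job3 since job1 and job2 are taken.
assignment = top_trading_cycles(endowment, prefs)
```

Strategyproofness comes from the fact that pointing at anything other than your true favorite in a round can never get you a better object.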

# Strat­e­gyproof Mechanisms: Possibilities

2 Jun 2014 2:26 UTC
23 points