Cofounder of Beeminder
PS: Of course this was also prompted by us nerding out about your and Marcus’s vows so thank you again for sharing this. I’m all heart-eyes every time I think about it!
Ah, super fair. Splitting any outside income 50⁄50 would still work, I think. But maybe that’s not psychologically right in y’all’s case, I don’t know. For Bee and me, the ability to do pure utility transfers feels like powerful magic!
Me to Bee while hashing out a decision auction today that almost felt contentious, due to messy bifurcating options, but then wasn’t:
I love you and care deeply about your utility function and if I want to X more than you want to Y then I vow to transfer to you U_you(Y)-U_you(X) of pure utility! [Our decision auction mechanism in fact guarantees that.]
Then we had a fun philosophical discussion about how much better this is than the Hollywood concept of selfless love where you set your own utility function to all zeros in order for the other’s utility function to dominate. (This falls apart, of course, because of symmetry. Both of us do that and where does that leave us?? With no hair, an ivory comb, no watch, and a gold watchband, is where!)
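To make that vow concrete with invented numbers: say I prefer X and you prefer Y, with U_me(X) − U_me(Y) = $10 and U_you(Y) − U_you(X) = $4. Then the efficient choice is X, and any transfer from me to you between $4 and $10 leaves us both better off than doing Y. Transferring $7 splits the $6 of surplus evenly: each of us ends up $3 ahead.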
Ooh, this is exciting! We have real disagreements, I think!
It might all be predicated on this: Rather than merge finances, include in your vows an agreement to, say, split all outside income 50⁄50. Or, maybe a bit more principled, explicitly pay your spouse for their contributions to the household.
One way or another, rectify whatever unfairness there is in the income disparity directly, with lump-sum payments. Then you have financial autonomy and can proceed with mechanisms and solution concepts that require transferable utility!
I love this so much and Bee (my spouse) and I have started talking about it. Our first question is whether you intend to merge your finances. We think you shouldn’t! Because having separate finances means having transferable utility, which puts more powerful and efficient and fair decision/bargaining mechanisms at your disposal.
My next question is why the KS (Kalai–Smorodinsky) solution vs the Nash solution to the bargaining problem?
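(A toy illustration, with a made-up Pareto frontier and disagreement point, of how the two solutions can come apart in the 2-player case:)

```python
# Toy comparison of the Nash vs Kalai-Smorodinsky (KS) bargaining
# solutions. The frontier u2 = 1 - u1^2 and the disagreement point
# (0, 0) are invented purely for illustration.
import numpy as np

u1 = np.linspace(0, 1, 100001)
u2 = 1 - u1**2                # Pareto frontier (made up)
d1, d2 = 0.0, 0.0             # disagreement point

# Nash: maximize the product of gains over the disagreement point.
nash = np.argmax((u1 - d1) * (u2 - d2))

# KS: the frontier point where each player's gain is the same
# fraction of their ideal (best-case) gain.
i1, i2 = u1.max(), u2.max()   # ideal points
ks = np.argmin(np.abs((u1 - d1)/(i1 - d1) - (u2 - d2)/(i2 - d2)))

print("Nash:", round(u1[nash], 3), round(u2[nash], 3))  # ~(0.577, 0.667)
print("KS:  ", round(u1[ks], 3), round(u2[ks], 3))      # ~(0.618, 0.618)
```

Nash maximizes the product of the gains; KS equalizes the fraction of their best-case gain that each player gets, which is why it picks the symmetric point here.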
But also are you sure the Shapley value doesn’t make more sense here? (There’s a Hart & Mas-Colell paper that looks relevant.) Either way, this may be drastically simplifiable for the 2-player case.
Thanks so much for sharing this. It’s so sweet and nerdy and heart-warming and wonderful! And congratulations!
Oh, Quirrell is referring to what game theorists call Cheap Talk. If the thing I’m trying to convince you of is strictly in my own brain—like whether I intend to cooperate or defect in an upcoming Prisoner’s Dilemma—then any promises I make are, well, cheap talk. This is related to costly signals and strategic commitment, etc etc.
Anyway, I think that’s the missing piece there. “Nothing you can do to convince me [about your own intentions] [using only words]”.
This is indeed a fun way to illustrate Bayesian thinking! But I have a monkey wrench! There exist people who view smileys as almost explicitly connoting passive-aggression or sarcasm. Like the whole reason to add a smiley is to soften something mean. I’m not quite sure if there are enough such people to worry about but I think that that perception of smileys is out there.
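To put made-up numbers on the monkey wrench: suppose 10% of messages like this are actually hostile, that half of hostile senders add a smiley to soften the jab, and that 30% of friendly senders add one out of plain cheerfulness. Then by Bayes, P(hostile | smiley) = (0.5 × 0.1) / (0.5 × 0.1 + 0.3 × 0.9) ≈ 0.16, up from the 10% prior. With numbers like those, the smiley is weak evidence of hostility, not friendliness.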
Correction to the Ainslie link: http://picoeconomics.org/breakdown.htm
Hi from the future! Beeminder has a version of this built in: the one-week akrasia horizon. You can change anything about a Beeminder goal, including ending it, at any time, but the change doesn’t take effect for a week. As Katja Grace once said on Overcoming Bias: “[you] can’t change it out of laziness unless you are particularly forward thinking about your laziness (in which case you probably won’t sign up for this).”
I’m mildly terrified that it’s against the norms to reply to something this old. I’ve been thinking hard about your (Scott’s) recent ACX post, “Towards A Bayesian Theory Of Willpower,” and am digging up all your previous thoughts on the topic, so here I am.
Good thought experiment! I replied in the form of another Yudkowsky vignette. :)
Summary: “Infinity” is a perfectly coherent Cheerful Price for, say, something sufficiently repugnant to you or something very unethical. (But also you must have a finite Cheerful Price for anything, no matter how bad, if the badness happens with sufficiently small probability.)
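A quick sketch of why, in vNM terms: utilities are real numbers, so even your worst outcome B has some finite u(B). Taking $50 to run a probability p of B is worth it whenever p·u(B) + (1−p)·u(status quo + $50) ≥ u(status quo), and since u(B) is finite (and assuming the $50 has any value to you at all), that inequality holds for all sufficiently small p > 0.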
That reminds me of this delightful and hilarious (edit: and true!) thing Eliezer said once:
> Let me try to clear up the notion that economically rational agents must be cold, heartless creatures who put a money price on everything.
>
> There doesn’t have to be a financial price you’d accept to kill every sentient being on Earth except you. There doesn’t even have to be a price you’d accept to kill your spouse. It’s allowed to be the case that there are limits to the total utility you know how to generate by spending currency, and for anything more valuable to you than that, you won’t exchange it for a trillion dollars.
>
> Now, it *does* have to be the case for a von Neumann-Morgenstern rational agent that if a sum of money has any value to you at all, you will exchange anything else you have—or any possible event you can bring about—*at some probability* for that sum of money. So it *is* true that as a rational agent, there is some *probability* of killing your spouse, yourself, or the entire human species that you will cheerfully exchange for $50.
>
> I hope that clears up exactly what sort of heartless creatures economically rational agents are.
Interesting! It hadn’t occurred to me that this could be read as any kind of repudiation of “shut up and multiply”. My previous comment on this post takes a stab at reconciling Cheerful Prices with my own extreme shut-up-and-multiply way of thinking.
Oh my goodness I love this. I’m actually so philosophically on board that I’m confused about treating Cheerful Prices as single real numbers. In my homo-economicus worldview, there exists a single price at which I’m exactly indifferent and then my cheerfulness goes up smoothly/continuously from there. It feels very arbitrary to pick something on that continuum and call it “the” cheerful price I have.
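In symbols: if c(p) is my cheerfulness at price p, with c(p₀) = 0 at my indifference price p₀ and c increasing continuously from there, then any particular “Cheerful Price” is inf{p : c(p) ≥ θ} for some cheerfulness threshold θ > 0, and the arbitrariness is all in the choice of θ.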
(My answer is to turn the nerdery up to 11 and compute a Shapley value, etc etc, but let me save that for another time or place. Jacob Falkovich and I have been talking about jointly blogging about this. We’ll definitely want to tie it in to the concept of Cheerful Prices if we do!)
Translated into this delightful new language of Cheerful Prices, the rough version of my approach is like so:
I as the buyer name my highest possible Cheerful Price (the most I could pay and just barely find it worth it) and you as the seller name your lowest possible Cheerful Price (below which it’s just not worth it to you) and we settle on the mean of those two.
But maybe the point of Cheerful Prices is to simplify that. Let one person on one side of the trade make a guess about the consumer surplus and name something in that range. I.e., by naming my Cheerful Price I’m saying that at that price I’d be getting a big enough chunk of the consumer surplus that I don’t need to know the size of your chunk. If you, as my counterparty, feel the same then we’re golden.
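A minimal sketch of that in code (function name and numbers invented for illustration):

```python
# Meeting in the middle: the buyer names the most they could pay and
# just barely find the trade worth it, the seller names the least they
# could accept, and they split the difference.

def split_the_difference(buyer_max: float, seller_min: float) -> float | None:
    """Return the agreed price, or None if no mutually agreeable price exists."""
    if buyer_max < seller_min:
        return None                      # ranges don't overlap: no deal
    return (buyer_max + seller_min) / 2  # split the surplus evenly

# Buyer is indifferent at $40, seller at $20: agree on $30, giving
# each side $10 of the $20 of surplus.
print(split_the_difference(40, 20))  # 30.0
```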
Really good points. It’s funny, I have a draft of a similar point about personal behavior change that I tried to make as provocative-sounding as possible:
http://doc.dreev.es/carbonfoot (Trying To Limit Your Personal Carbon Footprint Hurts The Environment)
But note the PS where I suggest a counterargument: making personal sacrifices for climate change may shape your identity, drive you to greater activism, and make your activism and climate evangelism more persuasive (to those who don’t appreciate the economics and game theory of it).
Nice! I’ve heard a similar idea called a “talent stack” or “skill stack” but explaining it in terms of staking out a chunk of the Pareto frontier is much better.
Coincidentally, I just wrote a post explaining the idea of Pareto dominance—http://blog.beeminder.com/pareto—in case that’s useful to anyone.
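To make the Pareto frontier idea concrete, here’s a tiny sketch with invented (coding, writing) skill scores:

```python
# Pareto dominance over skill vectors: a point is on the frontier if
# no one is at least as good at everything and strictly better at
# something. The skill scores below are invented for illustration.

def dominates(a: tuple[float, ...], b: tuple[float, ...]) -> bool:
    """True if `a` is at least as good as `b` everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(points: list[tuple[float, ...]]) -> list[tuple[float, ...]]:
    """Keep only the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two specialists, a generalist whom neither specialist dominates, and
# one person strictly dominated by the generalist.
people = [(9, 2), (2, 9), (6, 6), (3, 3)]
print(pareto_frontier(people))  # [(9, 2), (2, 9), (6, 6)]
```

The generalist (6, 6) stakes out a chunk of the frontier without being best at either skill.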
Thank you! See above (“Better to not have people feel like their desperation is being capitalized on.”) for my response to your first question. And we actually believe that our system is, in practice if not in theory, strategy-proof. It’s explicitly ok to game the system to our hearts’ delight. It seems to be quite robust to that. Our utilities tend to either be uncannily well-matched, in which case it’s kind of a coin flip who wins, or they’re wildly different, but we never seem to have enough certainty about how different they’ll be for it to be fruitful to distort our bids much.
The strategy of “just say a number such that you’re torn about whether you’d rather win or lose” seems to be close enough to optimal.
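For concreteness, a minimal sketch of a sealed-bid yootle for a 50⁄50 decision (the mean-of-bids payment rule here is one simple illustrative variant, not necessarily our exact rule):

```python
# Sealed-bid "yootle" over a 50/50 joint decision: each person bids
# what getting their way is worth to them; the higher bid wins and
# compensates the loser. The mean-of-bids payment is an illustrative
# variant, not necessarily the exact rule we use.
import random

def yootle(bid_a: float, bid_b: float) -> tuple[str, float]:
    """Return (winner, payment the winner makes to the loser)."""
    if bid_a == bid_b:
        winner = random.choice(["A", "B"])  # exact tie: flip a coin
    else:
        winner = "A" if bid_a > bid_b else "B"
    return winner, (bid_a + bid_b) / 2

# Winning means getting your way but paying roughly your bid; losing
# means conceding but getting paid roughly your bid. So the bid you're
# "torn" about is your honest indifference point.
print(yootle(7, 3))  # ('A', 5.0): A gets their way and pays B $5
```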
How about adding a tiny bit of ambiguity (or evasion of the direct question) and making up for it with more effusiveness, e.g., “it’s not only my job but it feels really good to know that I’m helping you so I really want you to bug me about even trivial-seeming things!” All true and all she’s omitting is her immediate annoyance but that is truly secondary, as she points out below about first-order vs second-order desires.
Yes, we’re super keen to make sure the efficient thing happens regardless of the initial distribution of resources/responsibilities/property-rights/etc. And we use yootling as a bargaining mechanism to make that happen. In general we’re always willing to shove work to each other or redistribute resources as efficiency dictates, using payments to make that always be fair.
In practice the sealed-bid version seems to be ungameable, at least for us! None of the problems you mentioned have arisen. My parents have tried this and had more problems but as far as I could tell it always involved contention about what to consider to be joint 50⁄50 decisions. Bethany and I seem to have no problem with that, using the heuristic of “when in doubt, just call it a 50⁄50 decision and yootle for it”.
Fixed and fixed. Thank you!