Cofounder of Beeminder
dreeves
To make sure I understand this concern:
It may be better to use a larger hash space to avoid internal collisions (within the data set), but a larger hash space also reduces the number of external collisions.
Are you thinking someone may want plausible deniability? “Yes, my email hashes to this entry with a terrible Brier score but that could’ve been anyone!”
This should be fine. In past years, Scott has had an interface where you could enter your email address and get your score. So the ability to find out other people’s scores by knowing their email address is apparently not an issue. And it makes sense to me that one’s score in this contest isn’t particularly sensitive private information.
Source: Comment from Scott on the ACX post announcing the results
“At some point one of those groups will be devoured by snakes” is erroneous
I wouldn’t say erroneous but I’ve added this clarification to the original question:
“At some point one of those groups will be devoured by snakes and then I stop” has an implicit “unless I roll snake eyes forever”. I.e., we are not conditioning on the game ending with snake eyes. The probability of an infinite sequence of non-snake-eyes is zero, and that’s the sense in which it’s correct to say “at some point snake eyes will happen”, but non-snake-eyes forever is possible in the technical sense of “possible”.
It sounds contradictory but “probability zero” and “impossible” are mathematically distinct concepts. For example, consider flipping a coin an infinite number of times. Every infinite sequence like HHTHTTHHHTHT… is a possible outcome but each one has probability zero.
So I think it’s correct to say “if I flip a coin long enough, at some point I’ll get heads” even though we understand that “all tails forever” is one of the infinitely many possible sequences of coin flips.
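To put a number on that, here’s a quick sketch (my own illustration, not from the original question) of how the probability of “all tails so far” stays strictly positive at every finite stage yet shrinks toward zero:

```python
# Probability of flipping all tails in n fair coin flips: (1/2)^n.
# Positive for every finite n (so "all tails forever" is never ruled
# out at any finite stage), but it tends to zero as n grows.
def p_all_tails(n: int) -> float:
    return 0.5 ** n

for n in [1, 10, 100]:
    print(n, p_all_tails(n))
# n=10 gives about 0.001; n=100 is already below 1e-30
```

The limit of these probabilities is zero, which is exactly the “probability zero but not impossible” situation above.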
Just set it up in the Beeminder work Slack and I am immediately in love 😍
First forecast: Will at least 4 of us (including me) play this reindeer game? (96% probability so far)
Ooh, I think there’s a lot of implicit Beeminder criticism here that I’m eager to understand better. Thanks for writing this up!
We previously argued against similar claims (https://blog.beeminder.com/blackmail/) and said that the “just get the different parts of yourself to get along” school of thought was insufficiently specific about how to do that. But here you’ve suggested some smart, specific ideas and they sound good!
My other Beeminder defense is that there are certain bare minimums that you know it would be irrational to fall below. So I recommend having the Beeminder goal as insurance and then also implementing all the strategies you describe. If those strategies work and it’s easy-peasy to stay well above Beeminder’s bright red line, then wonderful. Conflict avoided. If those strategies happen to fail, Beeminder will catch you. (Also you get a nice graph of your progress, quantified-self-style.)
PS: More recently we had a post about how compatible Beeminder turns out to be with CBT which I think also argues against the dichotomy you’re implying here with Conflict vs Cooperation. https://blog.beeminder.com/cbt/
Btw, Scott mentioned having to read a bunch to figure out the subtle difference between loss aversion and the endowment effect. I attempted a full explainer: https://blog.beeminder.com/loss/
I don’t necessarily endorse it but the moral argument would go like so: “I’m definitely not going to pay to read that article so me bypassing the paywall is not hurting the newspaper. The marginal cost is zero. Stealing from a kiosk, on the other hand, deprives the newspaper of a sale (and is just obvious plain old stealing).” In other words, “I’m not stealing a newspaper from the kiosk, I’m just breaking in, photocopying it real quick, and putting it right back. No harm no foul!”
A counterargument might be that you’re contributing to demand for paywall-bypassing which does deprive the newspaper of sales, just less directly.
This list is pretty amazing (and I’m not just saying that because Beeminder is on it!) and you’ve persuaded me on multiple things already. Some comments and questions:
CopyQ: I use Pastebot and I see there are So Many of these and would love a recommendation from someone who feels strongly that there’s a particular one I should definitely be using.
Google Docs quick-create: You inspired me to make a link in my bookmarks bar (built in to Chrome) to https://doc.new which I think is simpler and just as good. (Yes, it’s kind of ridiculous that “doc.new” is their URL for that instead of something sane like “docs.google.com/new”.)
Beeminder vs StickK: Obviously I’m absurdly biased but I can’t imagine anyone here preferring StickK and it would actually help us a ton to understand why someone might prefer StickK. Their referee feature is better, but everything else is so much worse! Especially anti-charities—those are an abomination from an EA perspective, right??
Pocket: We have an argument at https://blog.beeminder.com/pocket for why this is a big deal. (I guess I should also mention our Habitica integration; and Focusmate is probably coming soon!)
Good question and good answers! Someone mentioned that the fancy/expensive Beemium plan lets you cap pledges at $0. On the non-premium plan you can cap pledges at $5, so another conceivable solution is to combine that with a conservative slope on your graph, plus setting alarms or some such, plus chalking up occasional failures, if rare enough, as effectively the cost of the service.
Or like another person said, you can make the slope zero (no commitment at all), but that may defeat the point, with the graph offering no guidance on how much you’d like to be doing.
PS: Of course this was also prompted by us nerding out about your and Marcus’s vows so thank you again for sharing this. I’m all heart-eyes every time I think about it!
Ah, super fair. Splitting any outside income 50/50 would still work, I think. But maybe that’s not psychologically right in y’all’s case, I don’t know. For Bee and me, the ability to do pure utility transfers feels like powerful magic!
Me to Bee while hashing out a decision auction today that almost felt contentious, due to messy bifurcating options, but then wasn’t:
I love you and care deeply about your utility function and if I want to X more than you want to Y then I vow to transfer to you U_you(Y)-U_you(X) of pure utility! [Our decision auction mechanism in fact guarantees that.]
Then we had a fun philosophical discussion about how much better this is than the Hollywood concept of selfless love where you set your own utility function to all zeros so that the other’s utility function dominates. (This falls apart, of course, because of symmetry. If we both do that, where does that leave us?? With no hair, an ivory comb, no watch, and a gold watchband, is where!)
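For concreteness, here’s one simple buyout-style decision auction. This is my own illustrative sketch of a mechanism with the guarantee described above, not necessarily the exact rules Bee and I use:

```python
# Hypothetical two-party decision auction (an illustrative assumption,
# not a documented Beeminder-household mechanism): each party privately
# bids the dollar value of getting their way. The higher bidder wins
# the decision and pays the loser the *loser's* bid, so the loser is
# fully compensated at their own stated valuation.
def decide(bid_a: float, bid_b: float):
    if bid_a >= bid_b:
        return ("A", bid_b)  # A wins the decision, pays B the amount bid_b
    return ("B", bid_a)      # B wins the decision, pays A the amount bid_a

winner, transfer = decide(30.0, 20.0)
print(winner, transfer)  # A gets their way and pays B $20
```

The loser walks away with cash equal to their own valuation of winning, so (assuming truthful bids) nobody ends up worse off than if they’d gotten their way, and the decision goes to whoever values it more.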
Ooh, this is exciting! We have real disagreements, I think!
It might all be predicated on this: Rather than merge finances, include in your vows an agreement to, say, split all outside income 50/50. Or, maybe a bit more principled, explicitly pay your spouse for their contributions to the household.
One way or another, rectify whatever unfairness there is in the income disparity directly, with lump-sum payments. Then you have financial autonomy and can proceed with mechanisms and solution concepts that require transferable utility!
I love this so much and Bee (my spouse) and I have started talking about it. Our first question is whether you intend to merge your finances. We think you shouldn’t! Because having separate finances means having transferable utility, which puts more powerful and efficient and fair decision/bargaining mechanisms at your disposal.
My next question is why the KS solution vs the Nash solution to the bargaining problem?
But also are you sure the Shapley value doesn’t make more sense here? (There’s a Hart & Mas-Colell paper that looks relevant.) Either way, this may be drastically simplifiable for the 2-player case.
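To illustrate that the choice actually matters, here’s a toy example (my own, not from the vows): on a curved Pareto frontier the Nash and Kalai-Smorodinsky solutions pick genuinely different points.

```python
import math

# Toy bargaining problem (illustrative assumption): Pareto frontier
# u2 = sqrt(1 - u1) for u1 in [0, 1], disagreement point (0, 0),
# ideal payoffs (1, 1) for both players.
def frontier(u1: float) -> float:
    return math.sqrt(1.0 - u1)

grid = [i / 100000 for i in range(100001)]

# Nash solution: maximize the product of gains over the disagreement point.
nash_u1 = max(grid, key=lambda u1: u1 * frontier(u1))

# Kalai-Smorodinsky solution: the frontier point where each player's gain
# is the same fraction of their ideal gain (here that means u1 == u2).
ks_u1 = min(grid, key=lambda u1: abs(u1 - frontier(u1)))

print("Nash:", round(nash_u1, 3), "KS:", round(ks_u1, 3))
# -> Nash: 0.667  KS: 0.618
```

Nash gives player 1 a payoff of 2/3; KS gives both players the same payoff of about 0.618. Which feels fairer in a marriage seems like exactly the question at stake.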
Thanks so much for sharing this. It’s so sweet and nerdy and heart-warming and wonderful! And congratulations!
Oh, Quirrell is referring to what game theorists call Cheap Talk. If the thing I’m trying to convince you of is strictly in my own brain—like whether I intend to cooperate or defect in an upcoming Prisoner’s Dilemma—then any promises I make are, well, cheap talk. This is related to costly signals and strategic commitment, etc etc.
Anyway, I think that’s the missing piece there. “Nothing you can do to convince me [about your own intentions] [using only words]”.
This is indeed a fun way to illustrate Bayesian thinking! But I have a monkey wrench! There exist people who view smileys as almost explicitly connoting passive-aggression or sarcasm. Like the whole reason to add a smiley is to soften something mean. I’m not quite sure if there are enough such people to worry about but I think that that perception of smileys is out there.
Correction to the Ainslie link: http://picoeconomics.org/breakdown.htm
Hi from the future [1]! Beeminder has a version of this built in: the one-week akrasia horizon. You can change anything about a Beeminder goal, including ending it, at any time, but the change doesn’t take effect for a week. As Katja Grace once said on Overcoming Bias: “[you] can’t change it out of laziness unless you are particularly forward thinking about your laziness (in which case you probably won’t sign up for this).”
[1] I’m mildly terrified that it’s against the norms to reply to something this old. I’ve been thinking hard about your (Scott’s) recent ACX post, “Towards A Bayesian Theory Of Willpower,” and am digging up all your previous thoughts on the topic, so here I am.
Good thought experiment! I replied in the form of another Yudkowsky vignette. :)
Summary: “Infinity” is a perfectly coherent Cheerful Price for, say, something sufficiently repugnant to you or something very unethical. (But also you must have a finite Cheerful Price for anything, no matter how bad, if the badness happens with sufficiently small probability.)
That reminds me of this delightful and hilarious (edit: and true!) thing Eliezer said once:
Let me try to clear up the notion that economically rational agents must be cold, heartless creatures who put a money price on everything.
There doesn’t have to be a financial price you’d accept to kill every sentient being on Earth except you. There doesn’t even have to be a price you’d accept to kill your spouse. It’s allowed to be the case that there are limits to the total utility you know how to generate by spending currency, and for anything more valuable to you than that, you won’t exchange it for a trillion dollars.
Now, it *does* have to be the case for a von Neumann-Morgenstern rational agent that if a sum of money has any value to you at all, you will exchange anything else you have—or any possible event you can bring about—*at some probability* for that sum of money. So it *is* true that as a rational agent, there is some *probability* of killing your spouse, yourself, or the entire human species that you will cheerfully exchange for $50.
I hope that clears up exactly what sort of heartless creatures economically rational agents are.
I guess in practice it’d be the tiniest shred of plausible deniability. Say your prior is that alice@example.com almost surely didn’t enter the contest (only a 1% chance she did), but her hash is in the table (which happens by chance with probability 1/1000). Then you Bayesian-update to a 91% chance that she did in fact enter the contest. If you think she had even a 10% chance on priors, then her hash being in the table makes you 99% sure it’s her.
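The update above is just Bayes’ rule with a likelihood of 1 if she entered and a 1/1000 false-positive rate if she didn’t. A quick sketch (the 1/1000 chance-collision rate is the assumption from this thread):

```python
# Bayesian update for "did alice@example.com enter the contest?"
# given that her email's hash appears in the results table.
# If she entered, her hash is in the table for sure (likelihood 1);
# if she didn't, it appears by chance with probability 1/1000.
def p_entered_given_hash(prior: float, false_pos: float = 1 / 1000) -> float:
    return prior / (prior + (1 - prior) * false_pos)

print(round(p_entered_given_hash(0.01), 2))  # 1% prior -> 0.91
print(round(p_entered_given_hash(0.10), 2))  # 10% prior -> 0.99
```

So even a small hash table leaks a lot of evidence; a deliberately smaller hash space (more external collisions) would be needed for real deniability.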