I work at the Alignment Research Center (ARC). I write a blog on stuff I’m interested in (such as math, philosophy, puzzles, statistics, and elections): https://ericneyman.wordpress.com/
Sure! Let’s say that we make a trade: I buy a share of “Jesus will return in 2025” from you for 3 cents. Here’s what that means in practice:
- I give 3 cents to Polymarket, to hold until the end of the year. (In return, Polymarket gives me a “yes” share, which will be worth 100 cents if Jesus returns and 0 cents if he doesn’t return.)
- You give 97 cents to Polymarket. (In return, Polymarket gives you a “no” share, which will be worth 100 cents if Jesus does not return and 0 cents if he does return.)
If Jesus does not return by the end of the year, you get all 100 of those cents. If he returns, I get all 100 cents.
Now, let’s say that we’ve made this trade. Fast forward to November, and you’re interested in betting on the New York mayoral election. Maybe you’d like to buy shares of “Zohran Mamdani will win the mayoral election” because it’s trading for 70 cents, but you think he’s 85% likely to win, or something. You really wish you had those 97 cents that you gave to Polymarket to hold until the end of the year, because you can make a much more profitable (in expectation) bet now!
So you return to the Jesus market, to sell your “no” share. You paid 97 cents for it, but really, you’re willing to sell it for 95 cents now. You’ll eat that 2-cent loss, because at least then you’ll get to place that really good bet on the New York market, where you think you’re profiting a lot more in expectation. Meanwhile, I’m happy to be on the other end of that trade: I bought “Jesus will return” for 3 cents, and now I get to sell out of my position for 5 cents (by trading with you), earning me a guaranteed 2 cents.
(There are some details I’m eliding: basically, a “yes” share and a “no” share “cancel out” to 100 cents, so if you hold both 1 yes share and 1 no share, Polymarket internally just credits you 100 cents; it’s as if you get a dollar back and don’t hold any shares at all. I didn’t want to get into that because it’s a slightly confusing detail.)
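If it helps, here’s a toy sketch of the accounting in Python (my own illustration using the numbers from this example, not Polymarket’s actual internals):

```python
# Toy model of the trade above (my illustration, not Polymarket's internals).
# Prices are in cents; a winning share pays out 100 cents at resolution.

def share_payout(side: str, jesus_returns: bool) -> int:
    """Payout of one share when the market resolves."""
    if side == "yes":
        return 100 if jesus_returns else 0
    return 100 if not jesus_returns else 0

# I pay 3 cents for a "yes" share; you pay 97 cents for a "no" share.
# Polymarket escrows the combined 100 cents until the end of the year.
escrow = 3 + 97
assert escrow == 100

# If Jesus does not return, your "no" share collects the full escrow:
assert share_payout("no", jesus_returns=False) == 100
assert share_payout("yes", jesus_returns=False) == 0
```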
Does that make sense?
Ah oops, I now see that one of Drake’s follow-up comments was basically about this!
One suggestion that I made to Drake, which I’ll state here in case anyone else is interested:
Define a utility function: for example, utility = -(dollars paid out) - c*(variance of your estimator). Then, see if you can figure out how to sample people to maximize your utility.
I think this sort of analysis may end up being more clear-eyed in terms of what you actually want and how good different sampling methods are at achieving that.
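To make that concrete, here’s a minimal sketch under assumptions I’m making up for illustration (the estimator is a sample mean with variance sigma²/n, and every response costs the same fixed price):

```python
import numpy as np

# Minimal sketch of the suggested analysis, with hypothetical parameters.
# utility(n) = -(dollars paid out) - c * (variance of the estimator),
# assuming n responses at a fixed price each and estimator variance sigma2 / n.

def utility(n: int, price: float, sigma2: float, c: float) -> float:
    return -(price * n) - c * (sigma2 / n)

price, sigma2, c = 5.0, 4.0, 10_000.0  # made-up numbers
ns = np.arange(1, 1000)
best_n = ns[np.argmax([utility(n, price, sigma2, c) for n in ns])]
print(best_n)  # the optimum is near sqrt(c * sigma2 / price), about 89 here
```

Under these assumptions the optimal sample size works out to roughly sqrt(c · sigma² / price); a more interesting version would let different sampling methods change the variance term, which is where the comparison gets useful.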
This is a really cool mechanism! I’m surprised I haven’t seen it before—maybe it’s original :)
After thinking about it more, I have a complaint about it, though. The complaint is that it doesn’t feel natural to value the act of reaching out to someone at $X. It’s natural to value an actual sample at $X, and you don’t get a sample every time you reach out to someone, only when they respond.
Like, imagine two worlds. In world A, everyone’s fair price is below X, so they’re guaranteed to respond. You decide you want 1000 samples, so you pay $1000X. In world B, everyone has a 10% chance of responding in your mechanism. To get a survey with the same level of precision (i.e. variance), you still need to get 1000 responses, and not just reach out to 1000 people.
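Spelling out the arithmetic with toy numbers (mine):

```python
# Precision is set by the number of *responses*: the variance of a mean
# estimate scales like sigma2 / responses, regardless of how many people
# you contacted to get them.

target_responses = 1000

contacts_world_A = target_responses / 1.0  # everyone responds: 1,000 contacts
contacts_world_B = target_responses / 0.1  # 10% respond: 10,000 contacts in expectation

print(contacts_world_A, contacts_world_B)
```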
My suspicion is that if you’re paying per (effective) sample, you probably can’t mechanism-design your way out of paying more for people who value their time more. I haven’t tried to prove that, though.
I strongly agree. I can’t vouch for all of the orgs Ryan listed, but Encode, ARI, and AIPN all seem good to me (in expectation), and Encode seems particularly good and competent.
I have something like mixed feelings about the LW homepage being themed around “If Anyone Builds it, Everyone Dies”:
- On the object level, it seems good for people to pre-order and read the book.
- On the meta level, it seems like an endorsement of the book’s message. I like LessWrong’s niche as a neutral common space to rigorously discuss ideas (it’s the best open space for doing so that I’m aware of). Endorsing a particular thesis (rather than e.g. a set of norms for discussion of ideas) feels like it goes against this neutrality.
“dath ilani medical drama glowfics”
(fwiw I think many readers will have no idea how to parse these words. Maybe put them in a footnote and have it link to the glowfic?)
Oh, UMA could totally mis-resolve this one. But my contention is that Polymarket would overrule such a resolution.
This is only tangential to your post, but I’m curious what you think of orexin antagonists as an insomnia treatment. Concretely, if you think that orexin agonists are a promising way to make people sleep less without making them more sleepy during the day, would it also follow that orexin antagonists merely make people sleep more, without actually making them less sleepy during the day?
Also: do you think orexin antagonists might have substantial negative side effects that are not obvious to the people taking them? (I ask this as someone who’s trying orexin antagonists to treat my insomnia.)
This seems right to me!
“Recognize this” ✅

“and you’ll be able to shift your focus to the real work: becoming comfortable with the worst-case scenarios your anxiety is protecting you from.”

Any advice? Thanks!
“If one reads my posts, I think it should become very clear to the reader that either ARC’s research direction is fundamentally unsound, or I’m still misunderstanding some of the very basics after more than a year of trying to grasp it.”
I disagree. Instead, I think that either ARC’s research direction is fundamentally unsound, or you’re still misunderstanding some of the finer details after more than a year of trying to grasp it. Like, your post is a few layers deep in the argument tree, and the discussions we had about these details (e.g. in January) went even deeper. I don’t really have a position on whether your objections ultimately point at an insurmountable obstacle for ARC’s agenda, but if they do, I think one needs to really dig into the details in order to see that.
(ETA: I agree with your post overall, though!)
Alas, there is a $6,600 limit to how much you can donate to a political candidate (per election cycle).
My favorite example of a president being a good Bayesian is Abraham Lincoln (h/t Julia Galef):
See here and here for my attempts to do this a few years ago! Our project (which we called Pact) ultimately died, mostly because it was no one’s first priority to make it happen. About once a year I get contacted by some person or group who’s trying to do the same thing, asking about the lessons we learned.
I think it’s a great idea—at least in theory—and I wish them the best of luck!
(For anyone who’s inclined toward mechanism design and is interested in some of my thoughts around incentives for donors on such a platform, I wrote about that on my blog five years ago.)
Any chance we could get Ghibli Mode back? I miss my little blue monster :(
Ohh I see. Do you have a suggested rephrasing?
Empirically, the “nerd-crack explanation” seems to have been (partially) correct; see here.
Oh, I don’t think it was at all morally bad for Polymarket to make this market—just not strategic, from the standpoint of having people take them seriously.
Top Manifold user Semiotic Rivalry said on Twitter that he knows the top Yes holders, that they are very smart, and that the Time Value of Money hypothesis is part (but not the whole) of the story. The other part has to do with how Polymarket structures rewards for traders who provide liquidity.
https://x.com/SemioticRivalry/status/1904261225057251727
Yup, that’s right!