I’m a third-year PhD student at Columbia. My academic interests lie in mechanism design and algorithms related to the acquisition of knowledge. I write a blog on stuff I’m interested in (such as math, philosophy, puzzles, statistics, and elections): https://ericneyman.wordpress.com/
Eric Neyman
How much do you believe your results?
Great minds might not think alike
My PhD thesis: Algorithmic Bayesian Epistemology
Social behavior curves, equilibria, and radicalism
Pseudorandomness contest: prizes, results, and analysis
(Note: I work with Paul at ARC theory. These views are my own and Paul did not ask me to write this comment.)
I think the following norm of civil discourse is super important: do not accuse someone of acting in bad faith, unless you have really strong evidence. An accusation of bad faith makes it basically impossible to proceed with discussion and seek truth together, because if you’re treating someone’s words as a calculated move in furtherance of their personal agenda, then you can’t take those words at face value.
I believe that this post violates this norm pretty egregiously. It begins by saying that hiding your beliefs “is lying”. I’m pretty confident that the sort of belief-hiding being discussed in the post is not something most people would label “lying” (see Ryan’s comment), and it definitely isn’t a central example of lying. (And so in effect it labels a particular behavior “lying” in an attempt to associate it with behaviors generally considered worse.)
The post then confidently asserts that Paul Christiano hides his beliefs in order to promote RSPs. The post presents very little evidence that this is what’s going on, and Paul’s account seems consistent with the facts (and I believe him).
So in effect, it accuses Paul and others of lying, cowardice, and bad faith on what I consider to be very little evidence.
Edited to add: What should the authors have done instead? I think they should have engaged in a public dialogue with one or more of the people they call out / believe to be acting dishonestly. The first line of the dialogue should maybe have been: “I believe you have been hiding your beliefs, for [reasons]. I think this is really bad, for [reasons]. I’d like to hear your perspective.”
Overall numbers won’t show the English strain coming
Hi! I just wanted to mention that I really appreciate this sequence. I’ve been having lots of related thoughts, and it’s great to see a solid theoretical grounding for them. I find the notion that bargaining can happen across lots of different domains—different people or subagents, different states of the world, maybe different epistemic states—particularly useful. And this particular post presents the only argument for rejecting a VNM axiom I’ve ever found compelling. I think there’s a decent chance that this sequence will become really foundational to my thinking.
An elegant proof of Laplace’s rule of succession
Pseudorandomness contest, Round 1
[Question] Three questions about mesa-optimizers
Can group identity be a force for good?
Solving for the optimal work-life balance with geometric rationality
Pseudorandomness contest, Round 2
An exploration of exploitation bias
Puzzle 3 thoughts: I believe I can do it with 1 coin, as follows.
First, I claim that for any prime q, it is possible to choose uniformly among q + 1 outcomes with just one coin. I do this as follows:
Let p be a probability such that p^q + (1 − p)^q = 1/(q + 1). (Such a p exists by the intermediate value theorem, since p = 0 gives a value that’s too large and p = 1⁄2 gives a value that’s too small.)
Flip a coin that has probability p of coming up heads a total of q times. If all q flips land the same way, that corresponds to outcome 1. (This has probability p^q + (1 − p)^q = 1/(q + 1) by construction.)
For each k between 1 and q − 1, there are C(q, k) ways of getting exactly k heads out of q flips, all equally likely. Note that this quantity is divisible by q (since none of 1, …, k is divisible by q; this is where we use that q is prime). Thus, for each k, we can subdivide the particular sequences of getting exactly k heads out of q flips into q equally-sized classes. Each class corresponds to an outcome (2 through q + 1). The probability of each of these outcomes is (1 − 1/(q + 1))/q = 1/(q + 1), which is what we wanted.
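To spell out that last step (each of outcomes 2 through q + 1 receives one class for every k, and p was chosen so that p^q + (1 − p)^q = 1/(q + 1)):

$$\sum_{k=1}^{q-1}\frac{1}{q}\binom{q}{k}p^k(1-p)^{q-k}=\frac{1-p^q-(1-p)^q}{q}=\frac{1-\frac{1}{q+1}}{q}=\frac{1}{q+1}.$$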
Now, note that 2021*12 − 1 = 24251 is prime. (I found this by guessing and checking.) So do the above for q = 24251: flipping one coin 24251 times yields 24252 equally likely outcomes. Since 24252 = 2021*12, just assign 12 of the outcomes to each person; then each person has a 1/2021 chance of being selected.
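Here’s a minimal Python sketch of the whole scheme (my own illustration; the helper names find_p and pick_outcome are mine, not from the puzzle thread). It finds p by bisection, per the intermediate value theorem argument above, then converts a run of q flips into one of q + 1 equally likely outcomes by ranking the flip sequence among all sequences with the same number of heads:

```python
import math
import random

def find_p(q, tol=1e-15):
    """Bisect for p in (0, 1/2) with p^q + (1 - p)^q = 1/(q + 1).

    f(p) = p^q + (1 - p)^q decreases on (0, 1/2): f(0) = 1 is too large
    and f(1/2) = 2**(1 - q) is too small, so a root exists in between.
    """
    target = 1 / (q + 1)
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid**q + (1 - mid)**q > target:
            lo = mid  # f(mid) is too large; the root lies to the right
        else:
            hi = mid
    return (lo + hi) / 2

def pick_outcome(q, p, rng):
    """Flip the p-biased coin q times; return an outcome in {1, ..., q + 1}."""
    flips = [rng.random() < p for _ in range(q)]
    k = sum(flips)
    if k == 0 or k == q:
        return 1  # all flips equal: probability p^q + (1 - p)^q = 1/(q + 1)
    # Lexicographic rank of this sequence among the C(q, k) sequences with
    # exactly k heads. Since q is prime and 1 <= k <= q - 1, q divides
    # C(q, k), so rank % q splits them into q equally sized classes.
    rank, heads_left = 0, k
    for i, flip in enumerate(flips):
        if flip:
            # Sequences with a tail here (and the same prefix) rank lower.
            rank += math.comb(q - i - 1, heads_left)
            heads_left -= 1
    return 2 + rank % q

# Sanity check with a small prime: for q = 5, the q + 1 = 6 outcomes
# should each occur about 1/6 of the time.
q = 5
p = find_p(q)
rng = random.Random(0)
counts = [0] * (q + 1)
for _ in range(120_000):
    counts[pick_outcome(q, p, rng) - 1] += 1
print(counts)  # each entry should be roughly 20,000
```

For the actual puzzle, one would run this with q = 24251 and map outcome o to person (o − 1) // 12, giving each of the 2021 people exactly 12 outcomes.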
Conjecture (maybe 50% chance of being true?):
If you’re only allowed to use one coin, it is impossible to do this with fewer than 24251 flips in the worst case.
Question:
What if you can only use coins with rational probabilities?
Thanks! I’ve changed the title to “Great minds might not think alike”.
Interestingly, when I asked my Twitter followers, they liked “Alike minds think great”. I think LessWrong might be a different population. So I decided to change the title on LessWrong, but not on my blog.
(Conflict of interest note: I work at ARC, Paul Christiano’s org. Paul did not ask me to write this comment. I first heard about the truck (below) from him, though I later ran into it independently online.)
There is an anonymous group of people called Control AI, whose goal is to convince people to oppose responsible scaling policies on the grounds that they insufficiently constrain AI labs’ actions. See their Twitter account and website (also anonymous; Edit: the website now identifies Andrea Miotti of Conjecture as the director). (I first ran into Control AI via this tweet, which uses color-distorting visual effects to portray Anthropic CEO Dario Amodei in an unflattering light, in a way that’s reminiscent of political attack ads.)

Control AI has rented a truck that has been circling London’s Parliament Square. The truck plays a video of “Dr. Paul Christiano (Made ChatGPT Possible; Government AI adviser)” saying that there’s a 10-20% chance of an AI takeover and an overall 50% chance of doom, and of Sam Altman saying that the “bad case” of AGI is “lights out for all of us”. The back of the truck reads “Responsible Scaling: No checks, No limits, No control”. The video of Paul seems to me to be an attack on him (but see the Twitter discussion here).
I currently strongly believe that the authors of this post are either partly responsible for Control AI, or at least have been working with or in contact with Control AI. That’s because of the post’s focus on RSPs, and because both Connor Leahy and Gabriel Alfour have retweeted Control AI (which has a relatively small following).
Connor/Gabriel—if you are connected with Control AI, I think it’s important to make this clear, for a few reasons. First, if you’re trying to drive policy change, people should know who you are, at minimum so they can engage with you. Second, I think this is particularly true if the policy campaign involves attacks on people who disagree with you. And third, because I think it’s useful context for understanding this post.
Could you clarify if you have any connection (even informal) with Control AI? If you are affiliated with them, could you describe how you’re affiliated and who else is involved?
EDIT: This Guardian article confirms that Connor is (among others) responsible for Control AI.