Excellent news. Considered together with the announcement of AI scientists endorsing a statement in favour of researching how to make AI beneficial, this is the best week for AI safety that I can remember.
Taken together with the publication of Superintelligence, the founding of FLI and CSER, and the transition of SI into the research organisation MIRI, it’s becoming clear that the last few years have started to usher in a new chapter in AI safety.
I know that machine learning capabilities are also increasing, but let’s celebrate successes like these!
Thanks for your courage, Zoe!
Personally, I’ve tried to maintain anonymity in online discussion of this topic for years. I dipped my toe into openly commenting last week, and immediately received an email that made it more difficult to maintain anonymity—I was told “Geoff has previously speculated to me that you are ‘throwaway’, the author of the 2018 basic facts post”. Firstly, I very much don’t appreciate my ability to maintain anonymity being narrowed like this; anonymity is a helpful defense in any sensitive online discussion, not least this one. But yes, throwaway/anonymoose is me—I posted anonymously so as to avoid adverse consequences from friends who got more involved than me. But I’m not throwaway2, anonymous, or BayAreaHuman—those three are bringing evidence that is independent from mine, at least.
I only visited Leverage for a couple of months, back in 2014. One thing that resonated strongly with me about your post is that the discussion is badly confused by a lack of public knowledge and by strong narratives—about whether people are too harsh on Leverage, what biases one might have, and so on. This is why I think we often retreat to just stating “basic” or “common knowledge” facts; the facts cut through the spin.
Continuing in that spirit, I personally can attest that much of what you have said is true, and that the rest is congruent with the picture I built up there. They dogmatically viewed human nature as nearly arbitrarily changeable. Their plan was to study how to change their psychology, to turn themselves into Elon Musk-type figures, to take over the world. This was going to work because Geoff was a legendary theoriser, Connection Theory had “solved psychology”, and the resulting debugging tools were exceptionally powerful. People “worked” for ~80 hours a week—which demonstrated the power of their productivity coaching.
Power asymmetries and insularity were present to at least some degree. I personally didn’t encounter an NDA, or talk of “demons” etc. Nor did I get a solid impression of the psychological effects on people from that short stay, though of course there must have been some.
What’s frustrating about still hearing noisy debate on this topic, so many years later, is that Leverage being a really bad org seems overdetermined at this point. On the one hand, if I ranked MIRI, CFAR, CEA, FHI, and several startups I’ve visited in terms of how reality-distorting they can be, Leverage would score ~9, while no other would surpass ~7. (It manages to be nontransparent and cultlike in other ways too!) On the other hand, their productive output was… also like a 2/10? It’s indefensible. But still only a fraction of the relevant information is in the open.
As you say, it’ll take time for people to build common understanding, and to come to terms with what went down. I hope the cover you’ve offered will lead some others to feel comfortable sharing their experiences, to help advance that process.