Sharing information about Lightcone Infrastructure

(Please note that the purpose of this post is to communicate bits of information that I expect some people would really like to know, not to be a comprehensive analysis of Lightcone Infrastructure’s efficiency.

A friend convinced me to make this post just before I was about to fly to the Bay Area for Inkhaven. Not wanting to use skills gained through participation against the organizers, I wrote and published the post at the beginning of the program.)

Should you donate to Lightcone Infrastructure? In my opinion, if your goal is to spend money on improving the world the most: no. In short:

  • The resources are, and can be, used in ways the community wouldn’t endorse; I know people who regret their donations now that they know about these policies.

  • The org is run by Oliver Habryka, who puts personal conflicts above shared goals, and is fine with being the kind of agent others regret having dealt with.

  • (In my opinion, the LessWrong community has somewhat better norms, design taste, and standards than Lightcone Infrastructure.)

  • The cost of running/​supporting LessWrong is much lower than Lightcone Infrastructure’s spending.

Lightcone Infrastructure is fine with giving a platform and providing value to those working on destroying the world

Are you a nonprofit dedicated to infrastructure that helps humanity, or a commercial conference venue that doesn’t discriminate on the basis of trying to kill everyone?

Lighthaven, a conference venue and hotel run by Lightcone Infrastructure, hosted an event with Sam Altman as a speaker.

When asked about it, Oliver said that Lightcone would be fine with providing Lighthaven as a conference venue to AI labs for AI capabilities recruiting, perhaps at a higher price, as a tax.

While it’s fine for some to consider themselves businesses that don’t discriminate and platform everyone, Lighthaven is a venue funded by many who explicitly don’t want AI labs to be able to gain value from using the venue.

Some of my friends were sad and expressed regret about having donated to Lightcone Infrastructure upon hearing about this policy.

They donated to keep the venue existing, thinking of it as a place that helps keep humanity existing, perhaps also occasionally rented out to stay able to help good things. And it’s hard to blame them for expecting the venue not to host events that damage humanity’s long-term trajectory: the venue’s website literally says:

Lighthaven is a space dedicated to hosting events and programs that help people think better and to improve humanity’s long-term trajectory

They wouldn’t have made the donations if they had understood it as more of an impartial business that provides value to everyone who pays for this great conference venue, including those antithetical to the expressed goals of Lightcone Infrastructure, only occasionally using it for actually important things.

I previously donated to Lightcone because I personally benefited from the venue and wanted to say thanks (in the fuzzies category of spending, not the utilons category); but now I somewhat regret even that, as I wouldn’t donate to, say, an abstract awesome Marriott venue that hosted an EAG but also would be fine with hosting AI capabilities events.

Oliver Habryka is a counterparty I regret having; he doesn’t follow planecrash!lawful neutral/​good norms; be careful when talking to him

It’s one thing to have norms around whistleblowing and propagating information that needs to be shared to support community health. It’s another to use information, shared with you for the purpose of enabling coordination with a third party with shared goals, against that third party, after being asked not to.

There is a sense, well communicated in planecrash, that smart agents can just do things. If two agents have some common interest but can’t coordinate on it because that’d require sharing a piece of information that can be used adversarially, they can simply share the information and then not use it.

There is also a sense in which we can just aspire to be like that: to be the kinds of agents that are trustworthy, that even enemies sometimes want to coordinate with, knowing that it would be safe to do so.

It was disheartening when, in a conversation with Oliver Habryka (someone at the very center of this community, who I expected to have certainly read planecrash, and who runs Lightcone Infrastructure) about plans around a major positive event, I shared a piece of information, assuming both that he would already have access to it and that it would help enable coordination between him and a third party he had been somewhat adversarial towards. I wasn’t aware of the degree of adversarialness between them, and I wasn’t aware that he did not, in fact, already have access to that information; but speaking to someone so obviously rationalist and high-integrity, I assumed it was safe to share things, in general, that enable movement towards shared interests and that might prompt him to coordinate with someone else.

A short bit later, I messaged Oliver:

(Also, apparently, just in case, please don’t act on me having told you that [third party] are planning to do [thing] outside of this enabling you to chat to/​coordinate with them.)

It took him seven days to respond, for some reason. It was on Slack, so I don’t know when he read my message; but this was the longest I’ve ever had to wait for a reply from him. Throughout those seven days, my confidence that he would respond in the only obvious way I expected slowly dropped. (To be fair to him, he had likely been busy with some important stuff we were both a bit involved with at the time and might not have seen it.)

His reply was “Lol, no, that’s not how telling me things works”.

My heart dropped. This was what happens in earthly fiction with stupid characters; not with smart people who aspire to be dath ilani. It felt confusing, the way pranks can be confusing.

He added:

I won’t go public with it, but of course I am going to consider it in my plans in various ways
You can ask in-advance if I want to accept confidentiality on something, and I’ll usually say no

I felt like something died.

I replied:

Hmm? Confusing
To clarify, my request is not about confidentiality (though I do normally assume some default level of it when chatting about presumably sensitive topics- like, there’s stuff that has negative consequences if shared? but I haven’t asked for confidentiality here), it was mainly specifically about using information against people sharing it to coordinate. It would be very sad if people weren’t able to talk and coordinate because the action of not using against each other information needed to prompt them to talk and coordinate wasn’t available to them. Like, I’d expect people at the center of this community to be able to be at at least that level of Lawful? But I basically asked only, just in case, to please not act on it in ways adversarial to [third party]

(Separately, it would be pretty sad here specifically, with me repeatedly observing clear mistakes in some of why you dislike each other, where I’d bet you’ll change your mind at least somewhat, and this significantly contributes to my expectation of how good it would be for you and them to talk and coordinate)

He said:

It’s not lawful to only accept information that can be used to the benefit of other people!

Indeed that is like the central foundation of what I consider corruption

My impression is that his previous experiences at CEA and Leverage taught him that secrets are bad: if you’re not propagating information about all the bad things someone’s doing, they gain power and keep doing bad things, and that’s terrible.

I directionally agree: it’s very important to have norms about whistleblowing, about making sure people are aware of people who are doing low-integrity stuff, about criticisms propagating rather than being silenced.

At the same time, it’s important to separate (1) secrets that are related to opsec and being able to do things that can be damaged by being known in advance and (2) secrets related to some people doing bad stuff which they don’t want to be known.

One can have good opsec and at the same time have norms around whistleblowing: telling others, or going public, about what’s important to propagate, while also not sharing what people tell you because you might need to know it, with the expectation that you keep it to yourself unless it relates to some bad thing someone is doing that should be propagated.

I came to Lighthaven to talk to Oliver about a project that he, a third party, and I were all helping with or otherwise related to; to chat about this project and related things; and to share information about some related plans of that third party, to enable Oliver to coordinate with them. Oliver and that third party have identical declared goals related to making sure AI doesn’t kill everyone; but Oliver dislikes that third party, wants it to not gain power, and wouldn’t coordinate with it by default.

So: I shared information with Oliver about some plans, hoping it would enable coordination. This information was in no way the kind one could whistleblow about; it was just the kind that can be used to damage plans relating to a common goal.

And immediately after the conversation, I messaged Oliver asking him to, just in case, use this information only to coordinate with the third party, since I had shared it to enable coordination between two entities I perceived as having basically common goals, except that one doesn’t want the other to have much power, all else being equal (for reasons I thought were misguided and hoped could be partly resolved if they talked; they did talk, but Oliver didn’t seem to change his mind).

He later said he had already told some people, and that since he hadn’t agreed to the conditions before hearing the information, he could share it, even though he wouldn’t go public with it.

After a while, in a conversation that involved me repeatedly referring to Lawfulness of the kind exhibited by Keltham from Yudkowsky’s planecrash, he said that he hadn’t actually read planecrash. (A Keltham, met with a request like that relating to a third party with very opposite goals, would sigh, say the request should’ve been made in advance, and then not screw someone over, provided they’re not trying to screw you over and it’s not an incredibly important thing.)

My impression is that he has no concept of being the kind of entity that others, even enemies with almost opposite goals, are not worse off for having dealt and coordinated with; he has no qualms about doing this to those who perceive him as generally friendly. I think he just learned that keeping secrets is bad in general, and so by default he doesn’t keep them, unless he explicitly agrees to.

My impression is that he regrets being told the information, given the consequences of sharing it with others. But a smart enough agent—a smart enough human—should be able to simply not use the information in ways that make you regret hearing it. Like, you’re not actually required to share information with others, especially if this information is not about someone doing bad stuff and is instead about someone you dislike doing good stuff that could be somewhat messed up by being known in advance.

I am very sympathetic to the idea that people should be able to whistleblow and not be punished for it (I think gag orders should not exist, I support almost everything that FIRE is doing, etc.); yet Oliver’s behavior is not the behavior of someone who is grown up and can be coordinated with on the actually important stuff.

Successful movements can do opsec when coordinating internally, even if their members don’t like each other; they can avoid defecting when coordination is useful, even when screwing the other part of the movement over would be locally valuable.

There are people at Lightcone I like immensely; I agree with a lot of what Oliver says, and I am plausibly the person who has upvoted the most of his EA Forum comments; and yet I would be very wary of coordinating with him in the future.

It’s very sad not to feel safe sharing or saying things that are useful to people working on preventing extinction when around Oliver.

(A friend is now very worried that his LessWrong DMs can be read and used by Oliver, who has admin access.

I ask Oliver to promise that he’s not going to read established users’ messages without it being known to others at Lightcone Infrastructure and without a justification such as suspected spam, and that he isn’t going to share the contents of those messages. I further ask Lightcone to establish policies about the situations in which DMs and post drafts can be looked at, and to promise to follow these policies.)

(Lightcone Infrastructure’s design taste is slightly overrated and the spending seems high)

This is not a strongly held view; I greatly enjoy the LessWrong interface, reactions, etc., but I think the design taste of the community as a whole is better than Lightcone Infrastructure’s.

One example: for a while, on Chrome on iOS, it was impossible to hold a link on the frontpage to open it in a new tab, because posts opened at the very beginning of the tap (to reduce delays), which preempted the long-press gesture.

While processing events at the beginning of taps and clicks, and preloading the data to display on hover, is awesome in general, it did not work in this particular case, because people (including me) really want to be able to open many posts from the frontpage in multiple tabs.
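To illustrate the tradeoff, here’s a minimal sketch in TypeScript; this is not LessWrong’s actual code, and the function names are hypothetical:

```typescript
// Hypothetical sketch of the two approaches; not LessWrong's actual code.

// Problematic version: navigation fires on touchstart, before the browser
// can recognize a long-press, so the "open in new tab" context menu
// never gets a chance to appear.
function attachEagerLink(link: HTMLAnchorElement): void {
  link.addEventListener("touchstart", () => {
    window.location.href = link.href; // steals the long-press gesture
  });
}

// Friendlier version: use the start of the tap only for side-effect-free
// work (warming the cache), and leave navigation to the normal click
// event, which the browser suppresses when the user long-presses instead.
function attachFriendlyLink(link: HTMLAnchorElement): void {
  link.addEventListener("touchstart", () => {
    void fetch(link.href); // prefetch only; no navigation yet
  });
  // No click handler needed: the default anchor behavior navigates on
  // click, and long-press still opens the browser's context menu.
}
```

The second version keeps most of the perceived-latency win, since the data is already warm by the time the click lands, without taking over the gesture the browser needs for its own menu.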

It took raising this in the Slack, and other people agreeing, for this design decision to be changed.

Lightcone Infrastructure seems to be spending much more than would be sufficient to keep the website running. My sense, though I could be wrong here, is that it shouldn’t cost many hundreds of thousands of dollars to keep a website running and moderated, or even to ship new features with the help of the community (and perhaps of software engineers from countries with significantly lower salaries per level of competence).

Have I personally derived much value from Lightcone Infrastructure?

The website’s and community’s existence is awesome, and has been dependent first on Yudkowsky and later on LW 2.0. I have derived a huge amount of value from it: I found friends, had engaging and important conversations, and had an incredible amount of fun. Even though I now wouldn’t feel particularly good about donating to Lightcone to support Lighthaven, I wouldn’t feel particularly bad about donating to the part of their work that supports the website, as a thanks, from the fuzzies budget.

But I would not donate to Lightcone Infrastructure from the budget of donations to improve the world.