Sharing information about Lightcone Infrastructure
(Please note that the purpose of this post is to communicate bits of information that I expect some people would really like to know, not to be a comprehensive analysis of Lightcone Infrastructure’s efficiency.
A friend convinced me to make this post just before I was about to fly to the Bay Area for Inkhaven. Not wanting to use the skills gained from the program against its organizers, I wrote and published the post at the beginning of the program.)
Should you donate to Lightcone Infrastructure? In my opinion, if your goal is to spend money on improving the world the most: no. In short:
The resources are, and can be, used in ways the community wouldn’t endorse; I know people who regret their donations now that they know more about the policies.
The org is run by Oliver Habryka, who puts personal conflicts above shared goals, and is fine with being the kind of agent others regret having dealt with.
(In my opinion, the LessWrong community has somewhat better norms, design taste, and standards than Lightcone Infrastructure.)
The cost of running/supporting LessWrong is much lower than Lightcone Infrastructure’s spending.
Lightcone Infrastructure is fine with giving a platform and providing value to those working on destroying the world
Are you a nonprofit dedicated to infrastructure that helps humanity, or a commercial conference venue that doesn’t discriminate on the basis of trying to kill everyone?
Lighthaven, a conference venue and hotel run by Lightcone Infrastructure, hosted an event with Sam Altman as a speaker.
When asked about it, Oliver said that Lightcone would be fine with providing Lighthaven as a conference venue to AI labs for AI capabilities recruiting, perhaps for a higher price as a tax.
While it’s fine for some to consider themselves businesses that don’t discriminate and platform everyone, Lighthaven is a venue funded by many who explicitly don’t want AI labs to be able to gain value from using the venue.
Some of my friends were sad and expressed regret about donating to Lightcone Infrastructure upon hearing about this policy.
They donated to keep the venue existing, thinking of it as a place that helps keep humanity existing, perhaps also occasionally rented out so it can keep helping good things happen. And it’s hard to blame them for expecting the venue not to host events that damage humanity’s long-term trajectory: the website of the venue literally says:
Lighthaven is a space dedicated to hosting events and programs that help people think better and to improve humanity’s long-term trajectory
They wouldn’t have made the donations if they had understood it as more of an impartial business that provides value to everyone who pays for this great conference venue, including those antithetical to the expressed goals of Lightcone Infrastructure, only occasionally using it for actually important things.
I previously donated to Lightcone because I personally benefited from the venue and wanted to say thanks (in the fuzzies category of spending, not utilons category); but now I somewhat regret even that, as I wouldn’t donate to, say, an abstract awesome Marriott venue that hosted an EAG but also would be fine with hosting AI capabilities events.
Oliver Habryka is a counterparty I regret having; he doesn’t follow planecrash!lawful neutral/good norms; be careful when talking to him
It’s one thing to have norms around whistleblowing and propagating information that needs to be shared to support community health. It’s another to take information that was shared with you specifically to enable coordination with a third party who shares your goals, and want to use it against that third party, after being asked not to.
There is a sense, well communicated in planecrash, that smart agents can just do things. If two agents have some common interest but can’t coordinate on it because that’d require sharing a piece of information that can be used adversarially, they can simply share the information and then not use it.
There is also a sense in which we can just aspire to be like that; be the kinds of agents that can be trustworthy, that even enemies sometimes want to coordinate with, and know that it would be safe to do so.
It was disheartening, then, for this to happen in a conversation with Oliver Habryka: someone at the very center of this community, who I expected to have certainly read planecrash, and who runs Lightcone Infrastructure. While discussing plans around a major positive event, I shared a piece of information, assuming both that he already had access to it and that it would help enable coordination between him and a third party he had been somewhat adversarial towards. I wasn’t aware of the degree of adversariality between them, and I wasn’t aware that he did not, in fact, already have access to that information; but speaking to someone so obviously rationalist and high-integrity, I assumed it was safe, in general, to share things that advance shared interests and might prompt him to coordinate with someone else.
A short bit later, I messaged Oliver:
(Also, apparently, just in case, please don’t act on me having told you that [third party] are planning to do [thing] outside of this enabling you to chat to/coordinate with them.)
It took him seven days to respond, for some reason. It was on Slack, so I don’t know when he read my message; but this was the longest I’ve ever had to wait for a reply from him. Throughout the seven days, my confidence that he would respond in the only obvious way I expected slowly dropped. (To be fair to him, he had likely been busy with some important stuff we were both a bit involved with at the time and might not have seen it.)
His reply was “Lol, no, that’s not how telling me things works”.
My heart dropped. This was what happens in earthly fiction with stupid characters; not with smart people who aspire to be dath ilani. It felt confusing, the way pranks can be confusing.
He added:
I won’t go public with it, but of course I am going to consider it in my plans in various ways
You can ask in-advance if I want to accept confidentiality on something, and I’ll usually say no.
I felt like something died.
I replied:
Hmm? Confusing
To clarify, my request is not about confidentiality (though I do normally assume some default level of it when chatting about presumably sensitive topics- like, there’s stuff that has negative consequences if shared? but I haven’t asked for confidentiality here), it was mainly specifically about using information against people sharing it to coordinate. It would be very sad if people weren’t able to talk and coordinate because the action of not using against each other information needed to prompt them to talk and coordinate wasn’t available to them. Like, I’d expect people at the center of this community to be able to be at at least that level of Lawful? But I basically asked only, just in case, to please not act on it in ways adversarial to [third party]. (Separately, it would be pretty sad here specifically, with me repeatedly observing clear mistakes in some of why you dislike each other, where I’d bet you’ll change your mind at least somewhat, and this significantly contributes to my expectation of how good it would be for you and them to talk and coordinate)
He said:
It’s not lawful to only accept information that can be used to the benefit of other people!
Indeed that is like the central foundation of what I consider corruption
My impression is that his previous experiences at CEA and Leverage taught him that secrets are bad: if you’re not propagating information about all the bad things someone’s doing, they gain power and keep doing bad things, and that’s terrible.
I directionally agree: it’s very important to have norms about whistleblowing, about making sure people are aware of people who are doing low-integrity stuff, about criticisms propagating rather than being silenced.
At the same time, it’s important to separate (1) secrets that are related to opsec and being able to do things that can be damaged by being known in advance and (2) secrets related to some people doing bad stuff which they don’t want to be known.
One can have good opsec and, at the same time, have norms around whistleblowing: telling others, or going public, about what’s important to propagate, while still not sharing what people tell you because you might need to know it, with the expectation that you keep it to yourself unless it concerns some bad thing someone is doing that should be propagated.
I came to Lighthaven to talk to Oliver about a project that he, a third party, and I were all helping with; to chat about this project and related things; and to share information about some related plans of that third party, to enable Oliver to coordinate with them. Oliver and that third party have identical declared goals related to making sure AI doesn’t kill everyone; but Oliver dislikes that third party, wants it to not gain power, and wouldn’t coordinate with it by default.
So: I shared information with Oliver about some plans, hoping it would enable coordination. This information was in no way the kind that can be whistleblown about; it was just the kind that can be used to damage plans relating to a common goal.
And immediately after the conversation, I messaged Oliver asking him to, just in case, only use this information to coordinate with the third party, as I had shared it to enable coordination between two entities who I perceived as having basically common goals, except that one doesn’t want the other to have much power, all else being equal (for reasons that I thought were misguided and hoped could partly be resolved if they talked; they did talk, but Oliver didn’t seem to change his mind).
He later said he had already told some people, and that since he hadn’t agreed to the conditions before hearing the information, he could share it, even though he wouldn’t go public with it.
After a while, in a conversation that involved me repeatedly referring to Lawfulness of the kind exhibited by Keltham from Yudkowsky’s planecrash, he said that he hadn’t actually read planecrash. (A Keltham, met with a request like that relating to a third party with very opposite goals, would sigh, say the request should’ve been made in advance, and then not screw someone over, if they’re not trying to screw you over and it’s not an incredibly important thing.)
My impression is that he has no concept of being the kind of entity that others, even enemies with almost opposite goals, are not worse off for having dealt and coordinated with; he has no qualms about doing this to those who perceive him as generally friendly. I think he just learned that keeping secrets is bad in general, and so by default he doesn’t, unless he explicitly agrees to.
My impression is that he regrets being told the information, given the consequences of sharing it with others. But a smart enough agent—a smart enough human—should be able to simply not use the information in ways that make you regret hearing it. Like, you’re not actually required to share information with others, especially if this information is not about someone doing bad stuff and is instead about someone you dislike doing good stuff that could be somewhat messed up by being known in advance.
I am very sympathetic to the idea that people should be able to whistleblow and not be punished for it (I think gag orders should not exist, I support almost everything that FIRE is doing, etc.); yet Oliver’s behavior is not the behavior of someone who’s grown-up and can be coordinated with on the actually important stuff.
Successful movements can do opsec when coordinating with each other, even if they don’t like each other; they can avoid defecting when coordination is useful, even if it’s valuable to screw the other part of the movement over.
There are people at Lightcone I like immensely; I agree with a lot of what Oliver says, and am plausibly the person who has upvoted the most of his EA Forum comments; and yet I would be very wary of coordinating with him in the future.
It’s very sad to not feel safe to share/say things that are useful to people working on preventing extinction, when being around Oliver.
(A friend is now very worried that his LessWrong DMs can be read and used by Oliver, who has admin access.
I ask Oliver to promise that he’s not going to read established users’ messages without it being known to others at Lightcone Infrastructure and without a justification such as suspected spam, and isn’t going to share the contents of the messages. I further ask Lightcone to establish policies about situations in which DMs and post drafts can be looked at, and promise to follow these policies.)
(Lightcone Infrastructure’s design taste is slightly overrated and the spending seems high)
This is not a strongly held view; I greatly enjoy the LessWrong interface, reactions, etc., but I think the design taste of the community as a whole is better than Lightcone Infrastructure’s.
One example: for a while, on Chrome on iOS, it was impossible to hold a link on the frontpage to open it in a new tab, because posts opened at the start of the tap to reduce delays.
While processing events at the start of taps and clicks, and loading the data to display on hovers, is awesome in general, it did not work in this particular case, because people (including me) really want to be able to open multiple tabs with many posts from the frontpage.
It took raising this in the Slack, and other people agreeing, for this design decision to be changed.
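(To illustrate the tradeoff, here is a minimal sketch in TypeScript of the “navigate at the start of the tap” pattern; this is not Lightcone’s actual code, and the selector and the navigate() helper are made-up names. Handling navigation on touchstart avoids waiting for the full tap, but it also fires before the browser can recognize a long-press, which is what made “hold to open in a new tab” stop working.)

```typescript
// Hypothetical sketch of navigating at the start of a tap instead of on click.
const navigate = (url: string): void => {
  window.location.href = url;
};

document.querySelectorAll<HTMLAnchorElement>("a.post-title").forEach((link) => {
  // Fast path: start navigating as soon as the finger touches the link,
  // instead of waiting for the tap to complete and fire a "click".
  link.addEventListener("touchstart", (event) => {
    // Because this fires immediately, the browser never gets a chance to
    // recognize a long-press, so "hold to open in a new tab" can't trigger.
    event.preventDefault();
    navigate(link.href);
  });

  // Ordinary click handler for mouse users (and as a fallback).
  link.addEventListener("click", () => navigate(link.href));
});
```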
Lightcone Infrastructure seems to be spending much more than would’ve been sufficient to keep the website running. My sense, though I could be wrong here, is that it shouldn’t cost many hundreds of thousands of dollars to keep a website running and moderated and even to ship new features with the help of the community (and perhaps software engineers from countries with significantly lower salaries per level of competence).
Have I personally derived much value from Lightcone Infrastructure?
The website’s and community’s existence is awesome, and has been dependent first on Yudkowsky and later on LW 2.0. I have derived a huge amount of value from it: I found friends, had engaging and important conversations, and had an incredible amount of fun. Even though now I wouldn’t feel particularly good about donating to Lightcone to support Lighthaven, I wouldn’t feel particularly bad about donating to the part of their work which supports the website, as a thanks, from the fuzzies budget.
But I would not donate to Lightcone Infrastructure from the budget of donations to improve the world.
A lot of the claims about me, and about Lightcone, in this post are false, which is sad. I left a large set of comments on a draft of this post, pointing out many of them, though not all of them got integrated before the post was published (presumably because this post was published in a rush as Mikhail is part of Inkhaven, and decided to make this his first post of Inkhaven, and only had like 2 hours to get and integrate comments).
A few quick ones, though this post has enough errors that I mostly just want people to really not update on this at all:
This is technically true, but of course the whole question lies in the tax! I think the tax might be quite large, possibly enough to cover a large fraction of our total operational costs for many months (like a 3-4x markup on our usual cost of hosting such an event, or maybe even more). If you are deontologically opposed to Lighthaven ever hosting anything that has anything even vaguely to do with capability companies, no matter the price, then yeah, I think that’s a real criticism, but I also think it’s a very weird one. Even given that, at a high enough price, the cost to the labs would be virtually guaranteed to be more than they would benefit from it, making it a good idea even if you are deontologically opposed to supporting AI companies.
The promise that Mikhail asked me to make was, as far as I understood it, to “not use any of the information in the conversation in any kind of adversarial way towards the people who the information is about”. This is a very strong request, much stronger than confidentiality (since it precludes making any plans on the basis of that information that might involve competing or otherwise acting against the interests of the other party, even if they don’t reveal any information to third parties). This is not a normal kind of request! It’s definitely not a normal confidentiality request! Mikhail literally clarified that he thought that it would only be OK for me to consider this information in my plans, if that consideration would not hurt the interests of the party we were talking about.
And he sent the message in a way that somehow implied that I was already supposed to have signed up for that policy, as if it’s the most normal thing in the world, and with no sense that this is a costly request to make (or that it was even worth making a request at all, and that it would be fine to prosecute someone for violating this even if it had never been clarified at all as an expectation from the other side).
This is not true! My policy is simply that you should not assume that I will promise to keep your secrets after you tell me, if you didn’t check with me first. If you tell me something without asking me for confidentiality first, and then you clarify that the information is sensitive, I will almost always honor that! But if you show up and suddenly demand of me that I will promise that I keep something a secret, without any kind of apology or understanding that this is the kind of thing you do in advance, of course I am not going to just do whatever you want. I will use my best judgement!
My general policy here is that I will promise to keep things secret retroactively, if I would have agreed to accept the information with a confidentiality request in advance. If I would have rejected your confidentiality request in advance, you can offer me something for the cost incurred by keeping the secret. If you don’t offer me anything, I will use my best judgement and not make any intense promises but broadly try to take your preferences into account in as much as it’s not very costly, or offer you some weaker promise (like “I will talk about this with my team or my partner, but won’t post it on the internet”, which is often much cheaper than keeping a secret perfectly).
Roughly the aim here is to act in a timeless fashion and to not be easily exploitable. If I wouldn’t have agreed to something before, I won’t agree to it just because you ask me later, without offering me anything to make up the cost to me!
And to repeat the above again, the request here was much more intense! The request, as I understood it, was basically “don’t use this information in any kind of way that would hurt the party the information is about, if the harm is predictable”, which I don’t even know how to realistically implement at a policy level. Of course if I end up in conflict with someone I will use my model of the world which is informed by all the information I have about someone!
And even beyond that, I don’t think I did anything with the relevant information that Mikhail would be unhappy about! I have indeed been treating the information as sensitive. This policy might change if at some point the information looks more valuable to communicate. Mikhail seems only angry about me not fully promising to do what he wants, without him offering me anything in return, and despite me thinking that I would not have agreed to any kind of promise like this in the first place if I was asked to do that before receiving the information (and would have just preferred to never receive the information in the first place).
We’ve had internal policies here for a long time! We never look at DMs unless one of the users in the conversation reports a conversation as spam. Sometimes DM contents end up in error logs, but I can’t remember a time where I actually saw any message contents instead of just metadata in the 8 years that I’ve been working on LW (but we don’t have any special safeguards against it).
We look at drafts that were previously published. We also sometimes look at early revisions of posts that have been published for debugging purposes (not on-purpose, but it’s not something we currently have explicit safeguards or rules about). We never look at unpublished drafts, unless the user looks pretty clearly spammy, and never for established users.
Look, we’ve had this conversation during our fundraiser. There is zero chance of running an operation like LW 2.0 long-term without it somehow costing at least $200k/yr. Even if someone steps up and does it for free, that is still them sacrificing at least $200k in counterfactual income, if they are skilled enough to run LessWrong in the first place. I think even at a minimum skeleton crew, you would be looking at at least $300k of costs.
This is false! Most of our spending is LessWrong spending these days (as covered in our annual fundraiser post). All of our other projects are much closer to paying for themselves. Most of the cost of running Lightcone is the cost of running LessWrong (since it’s just a fully unmonetized product).
IDK, I am pretty sad about this post. I am happy to clarify my confidentiality policies and other takes on honoring retroactive deals (which I am generally very into, and have done a lot of over the years), if anyone ends up concerned as a result of it.
I will be honest in that it does also feel to me like this whole post was written in an attempt at retaliation when I didn’t agree with Mikhail’s opinions on secrets and norms. Like, I don’t think this post was written in an honest attempt at figuring out whether Lightcone is a good donation target.
I can confirm; Oliver keeps many secrets from me, that he has agreed to others, and often keeps information secret based on implicit communication (i.e. nobody explicitly said that it was secret, but his confident read of the situation is that it was communicated with that assumption). I sometimes find this frustrating because I want to know things that Oliver knows :P
Speaking generally, many parties get involved in zero-sum resource conflicts, and sometimes form political alliances to fight for their group to win zero-sum resource conflicts. For instance, if Alice and Bob are competing to get the same job, or Alice is trying to buy a car for a low price and Bob is trying to sell it to her for a high price, then if Charlie is Alice’s ally, she might hope that Charlie will take actions that help her get more/all of the resources in these conflicts.
Allies of this sort also expect that they can share information that is easy to use adversarially against them between each other, with the expectation it will be consistently used either neutrally or in their favor by the allies.
Now, figuring out who your allies are is not a simple process. There are no forms involved, there are no written agreements, it can be fluid, and picked up in political contexts by implicit signals. Sometimes you can misread it. You can think someone is allied, tell them something sensitive, then realize you tricked yourself and just gave sensitive information to someone. (The opposite error also occurs, where you don’t realize someone is your ally and don’t share info and don’t pick up all the value on the table.)
My read here is that Mikhail told Habryka some sensitive information about some third party “Jackson”, assuming that Habryka and Jackson were allied. Habryka, who was not allied with Jackson in this way, was simply given a scoop, and felt free to share/use that info in ways that would cause problems for Jackson. Mikhail said that Habryka should treat it as though they were allies, whereas Habryka felt that he didn’t deserve it and that Mikhail was saying “If I thought you would only use information in Jackson’s favor when telling you the info, then you are obligated to only use information in Jackson’s favor when using the info.” Habryka’s response is “Uh, no, you just screwed up.”
(Also, after finding out who “Jackson” is from private comms with Mikhail, I am pretty confused why Mikhail thought this, as I think Habryka has a pretty negative view of Jackson. Seems to me simply like a screw-up on Mikhail’s part.)
I don’t know how costly/beneficial this screw up concretely was to humanity’s survival, but I guess that total cost would’ve been lower if Habryka as a general policy were more flexible in when the sensitivity of information has to be negotiated.
Like, with all this new information I now am a tiny bit more wary of talking in front of Habryka. I may blabber out something that has a high negative expected utility if Habryka shares it (after conditioning on the event that he shares it) and I don’t have a way to cheaply fix that mistake (which would bound the risk).
And there isn’t an equally strong opposing force afaict? I can imagine blabbering out something that I’d afterwards negotiate to keep between us, where Habryka cannot convince me to let him share it, and yet it would’ve been better to allow him to share it.
Tbc, my expectations for random people are way worse, but Habryka seems below average among famous rationalists now? Right now I see & feel on average zero pull to adjust my picture of the average famous rationalist up or down, but it seems high variance since I didn’t ever try to learn what policies rationalists follow wrt negotiating information disclosure. I definitely didn’t expect them to use policies mentioned in planecrash outside fun low-stakes toy scenarios.
Feel free to update on “Oliver had one interaction ever with Mikhail in which Oliver refused to make a promise that Mikhail thought reasonable”, but I really don’t think you should update beyond that. Again, the summaries in this post of my position are very far away from how I would describe them.
There is a real thing here, which if you don’t know you should know, which is that I do really think confidentiality and information-flow constraints are very bad for society. They are the cause of, as far as I can tell, a majority of major failures in my ecosystem in the last few years, and mismanagement of e.g. confidentiality norms was catastrophic in many ways, so I do have strong opinions about this topic! But the summary of my positions on this topic is really very far from my actual opinions.
Thx, I think I got most of this from your top level comment & Mikhail’s post already. I strongly expect that I do not know your policy for confidentiality right now, but I also expect that once I do I’d disagree with it being the best policy one can have, just based on what I heard from Mikhail and you about your one interaction.
My guess is that refusing the promise is plausibly better than giving it for free? But I guess that there’d have been another solution where 1) Mikhail learns not to screw up again, and 2) you get to have people talk more freely around you to a degree that’s worth losing the ability to make use of some screw-ups, and 3) Mikhail compensates you in case 1+2 is still too far away from a fair split of the total expected gains.
I expect you’ll say that 2) sounds pretty negative to you, and that you and the community should follow a policy where there’s way less support for confidentiality, which can be achieved by exploiting screw-ups and by sometimes saying no if people ask for confidentiality in advance, so that people who engage in confidentiality either leave the community or learn to properly share information openly.
I mostly just want people to become calibrated about the cost of sharing information with strings attached. It is quite substantial! It’s OK for that coordination to happen based on people’s predictions of each other, without needing to be explicitly negotiated each time.
I would like it to be normalized and OK for someone to signal pretty heavily that they consider the cost of accepting secrets, or even more intensely, the cost of accepting information that can only be used to the benefit of another party, to be very high. People should therefore model that kind of request as likely to be rejected; and so if you just spew information onto the other party, and also expect them to keep it secret or use it only for your benefit, the other party is likely to stop engaging with you, or to tell you that they aren’t planning to meet your expectations.
I think marginally the most important thing to do is to just tell people who demand constraints on information, without wanting to pay any kind of social cost for it, to pound sand.
(A large part of the goal of this post is to communicate to people that Oliver considers the cost of accepting information to be very high, and to make people aware that they should be careful around Oliver and predict him better on this dimension, rather than repeating my mistake of expecting him to act not much worse than a priest of Abadar would.)
I think you could have totally written a post that focused on communicating that, and it could have been a great post! Like, I do think the cost of keeping secrets is high. Both me and other people at Lightcone have written quite a bit about that. See for example “Can you keep this confidential? How do you know?”
This post focuses on communicating that! (+ being okay with hosting ai capabilities events + less important misc stuff)
Well, presumably, if the lab is willing to make the trade, they at least believe that they’re benefiting from the trade, on net.
I don’t have a strong opinion on what kinds of trades you should make with AI labs, but “set a tax high enough that it’s not worth it for the lab on net” doesn’t seem like a totally crazy deontological rule?
Sure, and my policy above doesn’t rule that out. The only thing I said is that there is some price for which we’ll do it (my guess is de-facto there are probably some clearing prices here, as opposed to zero, but that would be a different conversation).
I agree that promise is overly restrictive.
‘Don’t make my helping you have been a bad idea for me’ is a more reasonable version, but I assume you’re already doing that in expectation, and it makes sense for different people to take the other’s expectations into account to different degrees for this purpose.
Yeah, I think this is a good baseline to aspire to, but of course the “my helping you” is the contentious point here. If you hurt me, and then also demand that I make you whole, then that’s not a particularly reasonable request. Why should I make you whole, I am already not whole myself!
Sometimes interactions are just negative-sum. That’s the whole reason why it usually makes sense to check in before doing things that could easily turn out to be negative-sum, which this situation clearly turned out to be!
Yep, that request would be identical, and is what I meant.
Oliver said “The promise that Mikhail asked me to make was, as far as I understood it, to ‘not use any of the information in the conversation in any kind of adversarial way towards the people who the information is about’.”.
Oliver understood you to be asking him not to use the information to hurt anyone involved, which is way more restrictive, and in fact impossible for a human to do perfectly.
Unless he meant something more specific by “any kind of adversarial way”, in which case that promise wouldn’t get you what you want.
If you meant the reasonable thing, and said it clearly, I agree Oliver’s misunderstanding is surprising and probably symptomatic of not reading planecrash.
Yeah, no, initially, I simply asked: just in case, please don’t use [the information I shared explicitly for the purpose of enabling Oliver to coordinate with the third party] except to coordinate with the third party, expecting “sure, no problem” in response.
Then, after hearing Oliver wouldn’t agree to confidentiality given that I haven’t asked him for it in advance, I tried to ask: okay, sure, if you have such a high cost of/principles relating to not telling other people things, please at least don’t try to tell people specifically for the purpose of harming the third party, making it a bad idea to have tried to coordinate. He then said that nope, he wouldn’t agree to even that partial confidentiality, because if, e.g., someone was considering whether it’s important to harm the third party now rather than later and telling them the information that I shared would’ve moved them towards harming the third party earlier, Oliver would want to share information with that someone so that they could harm the third party. (And also said he already told some people.)
(He ended up talking to the third party; but an opportunity to use the information adversarially did not turn up afaik.)
It’s plausible that he misunderstood what I was asking for throughout, but he had no intention of avoiding a situation where trying to coordinate with him would turn out to have been a bad idea for me.
(See also this comment.)
No, I didn’t say anything remotely like this! I have no such policy! I don’t think I ever said anything that might imply such a policy. I only again clarified that I am not making promises about not doing these things to you. I would definitely not randomly hand out information to anyone who wants to harm the third party.
At this point I am just going to stop commenting every time you summarize me inaccurately, since I don’t want to spend all day doing this, but please, future readers, do not assume these summaries are accurate.
I have clarified like 5 times that this isn’t because you didn’t ask in advance. If you had asked in advance I would have rejected your request as well, it’s just that you would have never told me in the first place.
This is also not what you asked for! You said “I just ask you to not use this information in a way designed to hurt [third party]”, which is much broader. “Not telling people” and “not using information” are drastically different. I have approximately no idea how to commit to “not use information for purpose X”. Information propagates throughout my world model. If I end up in conflict with a third party I might want to compete with them and consider the information as part of my plans. I couldn’t blind myself to that information when making strategic decisions.
Your message:
’Hypothetical scenario (this has not happened and details are made up):
Me and [name] are discussing the landscape of [thing] as it regards to Lightcone strategy. [name] is like “man, I feel like if I was worried that other people soon try to jump into the space, then we really should probably just back [a thing] because probably something will soon cement itself in the space”. I would be like “Oh, well, I think [third party] might do stuff”. Rafe is like “Oh, fuck, hmm, that’s bad”. I am like “Yep, seems pretty fucked. Plausibly we should really get going on writing up that ‘why [third party’s person] seems like a low-integrity dude’ post we’ve been thinking about”. [name] is like “Yeah, maybe. Does really seem quite bad if [third party’s person] tries to position himself here centrally. Actually, I think maybe [name] from CEA Comm health was working on some piece about [third party’s person]? Seems like she should know [third party’s person] is moving into the space, since it seems a bit more urgent if that’s happening”. I am like “Yep, seems right”.’
You didn’t say that when we were talking about it! You implied that since I didn’t ask in advance, you are not bound by anything; you did mention “I can keep things confidential if you ask me in advance, but of course I wouldn’t accept a request to receive private information about [third party] being sketchy that I can only use to their benefit?”
(“Being sketchy” is not how I’d describe the information. It was about an idea that Oliver is not okay with the third party working on, but is okay with others working on, because he doesn’t like the third party for a bunch of reasons and thinks it’s bad if they get more power, as per my understanding.)
I did not and would not have demanded somehow avoiding propagating the information. If you were like, “sorry, I obviously can’t actually not propagate this information in my world model and promise it won’t reflect on my plans, but I won’t actively try to use it outside of coordinating with the third party and will keep it confidential going forward”, that would’ve been great and expected and okay.
I asked to not apply effort to using the information against the third party. I didn’t ask to apply effort to not be aware of the information in your decision-making, to keep separate world-models, or whatever. Confidentiality with people outside your team and not going hard on figuring out how to strategically share or use this information to cause damage to the third party’s interests would’ve been understandable and acceptable.
Yeah, I honestly think the above is pretty clear?
I do not think it at all describes a policy of “if someone was trying to harm the third party, and having this information would cause them to do it sooner, then I would give them the information”. Indeed, it seems really very far away from that! In the above story nobody is trying to actively harm anyone else as far as I can tell? I certainly would not describe “CEA Comm Health team is working on a project to do a bunch of investigations, and I tell them information that is relevant to how highly they should prioritize those investigations” as being anything close to “trying to harm someone directly”!
No, I literally said “Like, to be clear, I definitely rather you not have told me”. And then later “Even if I would have preferred knowing the information packaged with the request”. And my first response to your request said “You can ask in-advance if I want to accept confidentiality on something, and I’ll usually say no”.
Sure, but I also wouldn’t have done that! The closest deal we might have had would have been a “man, please actually ask in advance next time, this is costly and makes me regret having that whole conversation in the first place. If you recognize that as a cost and owe me a really small favor or something, I can keep it private, but please don’t take this as a given”, but I did not (and continue to not) have the sense that this would actually work.
Maybe I am being dense here, and on first read this sounded like maybe a thing I could do, but after thinking more about it I do not know what I am promising if I promise I “won’t actively try to use [this information] outside of coordinating with the third party”. Like, am I allowed to write it in my private notes? Am I allowed to write it in our weekly memos as a consideration for Lightcone’s future plans? Am I not allowed to think the explicit thought “oh, this piece of information is really important for this plan that puts me in competition with this third party, better make sure to not forget it, and add it to my Anki deck”?
Like, I am not saying there isn’t any distinction between “information passively propagating” and “actively using information”, but man, it feels like a very tricky distinction, and I do not generally want to be in the business of adding constraints to my private planning and thought-processes that would limit how I can operate here, and relies on this distinction being clear to other people. Maybe other people have factored their mind and processes in ways they find this easy, but I do not.
This would’ve worked!
(Other branches seem less productive to reply to, given this.)
I changed my mind; at least in the case of my sharing information with you, if you were perfectly trustworthy you’d totally just defer to my beliefs for not making me worse off as a result. But, as you said, plausibly even in this easy case being perfect is way too hobbling for humans ’cause of infohazards.
Disappointed to see this kind of note.
The post is a lot less polished than it could’ve been and doesn’t make its points as strongly as I’d like, but to the best of my knowledge, none of the criticisms in this post are false.
All of the comments that you feel like weren’t integrated contained arguments that I consider invalid.
I didn’t reply to all of your comments because I didn’t see much sense in that.
Supporting the idea that the criticisms are false with a note on “Mikhail must’ve not had time” is weird, especially given that I explicitly told you all that I find the arguments in your comments invalid and didn’t want to reply in detail from my phone.
This was not the idea. The idea was that it would be okay to provide positive value to AI companies, given enough compensation to Lighthaven.
People who donated to keep Lighthaven going are not particularly happy about this (from n=2 people).
This is not the request that I made. I asked to not use information adversarially: to not try to cause harm to the third party using it.
Which (1) I was not made aware of by you prior to making the post and (2) is dependent on you not having ways to use the information to hurt the third party. This post is not made because you actually did something bad that hurt the third party; it’s made because you’re the kind of person who would, according to yourself.
That’s not what you did.
You didn’t signal in any way that any of that stuff was an option.
lol, no. It’s made because others are very sad about the details and told me I should write about them; it’s made because I don’t want people to end up regretting working with you; it’s made vaguely at the beginning of Inkhaven (I didn’t want it to be the first post, though) so as to not make people sad about helping me write well when it’s published.
I am pretty sure Lightcone is not a good donation target as someone who donated personally significant amounts to Lightcone and talked to friends who previously have or considered donating large amounts to Lightcone, and then regretted that/decided not to after learning about all this.
You said: “I don’t think I have any reason to ask you to not consider it in your plans insofar as these considerations are not hurting their interests or whatever” when I asked for clarification. This clearly implies you are asking me to not consider this information in my plans if doing so would hurt their interests!
You also clarified multiple other times that you were asking me to promise to not use this information in any future conflicts or anything like that, or to make plans on their basis that would somehow interfere with the other party’s plans, even if I thought they would cause grave harm if I didn’t interfere.
I am really not very optimistic about making agreements with you in-particular, based on how the one conversation I’ve ever had with you went. So no, that is not an option, though I will still try to do good by what I think you care about. But I do not want to risk you forming more expectations about how I will behave which you then get angry at me for and try to strongarm me into various things I don’t want to do. It’s not been fun dealing with you on this!
This is just false. I am not going around trying to randomly hurt people. All I am saying, and will continue to say, is that I am not promising you that I will use this information only in ways you approve of, or the third party would approve of. The bar is much higher than simply “an opportunity presents itself to hurt the third party!”, as I have told you multiple times!
Feel free to do a survey on this! I am sure almost all of our donors would of course have an exchange rate where instead of them donating, we just provide epsilon value to an AI company, and then they can use their money to do other good things in the world. I would be extremely surprised if your statement was true in any kind of generality.
Almost none of the information in this post is correct! If they updated because of takes like this post, then I think they just made a mistake.
To anyone else: please reach out to me if you somehow made updates in this direction, I would be highly surprised if you end up endorsing it. The only thing that seems plausible to me as a real update in the space is that for a high enough tax we will host basically arbitrary events at Lighthaven (not literally arbitrary, but like, I think we should have some price for basically anything, and I expect the tax to sometimes be very high). If you really don’t want that you should at least let me know! You can also leave comments here and I’ll be glad to respond.
Separately, I think it’s good to invite people like Sam Altman to events like the Progress Conference, and would of course want Sam to be at important diplomatic meetings. If you think that’s always bad, then I do think Lighthaven might be bad! I am definitely hoping for it to facilitate conversations between many people I think are causing harm for the world.
Look, “three hours on a Saturday night” is not the right amount of time to give someone if you are asking them for input on a post like this. I mean, you could have just not asked for input at all, but it’s clearly not an amount of time that should give you any confidence you got the benefits of input.
Seems like the rapid-fire nature of an InkHaven writing sprint is a poor fit for a public post under a personally-charged summary bullet like “Oliver puts personal conflict ahead of shared goals”.
High-quality discourse means making an effort to give people the benefit of the doubt when making claims about their character. It’s worth taking time to carefully follow our rationalist norms of epistemic rigor, productive discourse, and personal charity.
I’d expect a high-evidence post about a very non-consensus topic like this to start out in a more norm-calibrated and self-aware epistemic tone, e.g. “I have concerns about Oliver’s decisionmaking as leader of Lightcone based on a pattern of incidents I’ve witnessed in his personal conflicts (detailed below)”.
I don’t particularly want to damage the interests of Lightcone Infrastructure; I want people who’d find this information important for their decision-making to be aware of it, and most of the value is in putting the information out there. People can make their own inferences about whether they agree with my and my friends’ conclusions, and I don’t particularly think that spending many resources on a better-argued post presenting the same information more strongly is a very important thing.
I’m not particularly satisfied with the quality of this post, but that’s much more a matter of my aesthetic preferences than a judgement on the importance of putting this post out there.
(I would also feel somewhat bad about writing this post well after deriving better writing skills from Inkhaven, which means I wanted to publish it early on.)
Is this supposed to be a negative thing? I don’t think there is any obligation that people read any particular work of fiction in order to run an infrastructure project...
i feel like if you’re runnin a lightcone infrastructure project, you’re lowkey supposed to have read the existing literature on decision theory (sorry)
For a long time I felt that the people accusing rationalists of being a cult were ridiculous. This comment made me wonder whether I’ve been dismissing their claims too quickly.
I don’t think the decision theory described here is correct. (I’ve read Planecrash.)
Specifically, there’s an idea in glowfic that it should be possible for lawful deities to follow a policy wherein counterparties can give them arbitrary information, on the condition that information is not used to harm the information-provider. This could be as drastic as “I am enacting my plan to assassinate you now, and would like you to propose edits that we both would want to make to the plan”!
I think this requires agreement ahead of time, and is not the default mode of conversation. (“Can I tell you something, and you won’t get mad?” is a request, not a magic spell to prevent people from getting mad at you.) I think it also is arguably something that people should rarely agree to. Many people don’t agree to the weaker condition of secrecy, because the information they’re about to receive is probably less valuable than the costs of partitioning their mind or keeping information secret. In situations where you can’t use the information against your enemies (like two glowfic gods interacting), the value of the information is going to be even lower, and situations where it makes sense to do such an exchange even rarer. (Well, except for the part where glowfic gods can very cheaply partition their minds and so keeping secrets or doing pseudohypothetical reasoning is in fact much cheaper for them than it is for humans.)
That is, I think this is mostly a plot device that allows for neat narratives, not a norm that you should expect people to be expecting to follow or get called out.
[This is not a complete treatment of the issue; I think most treatments of it only handle one pathway, the “this lets you get information you can use for harm reduction” pathway, and in fact in order to determine whether or not an agent should do it, you must consider all relevant pathways. But I think the presumption should not be “the math pencils out here”, and I definitely don’t think the math pencils out in interacting with Oli. I think characterizing that as “Oli is a bad counterparty” instead of something like “Oli doesn’t follow glowfic!lawful deity norms” or “I regret having Oli as a counterparty” is impolite.]
I see your point and initially agreed that “Oliver is a bad counterparty” is indeed not polite and intended to change that, but saw that I actually wrote “Oliver is not a good counterparty”.
That was produced by “Oliver is the kind of counterparty you might regret having dealt with, as I have”.
It’s less of an impolite judgement than “Oliver is a bad counterparty”, but if you think it reads the same, I’ll try to change that to be more polite while still expressing that I think it often makes sense for people to be careful around him.
I agree with you on decision theory. A lawful evil god would indeed be fine with being Oliver here.
Keltham wouldn’t, though.
Yep, it is not a norm I would’ve expected someone random to follow; the reason for the expectation was some combination of “we’re working on a common project” and “the Lightcone Infrastructure team, on a wifi with the password ‘wearethelight’, wouldn’t be that much worse than a priest of Abadar”.
And, like, if Oliver had asked for payment to offset the cost of upholding secrecy, it would’ve been fine; instead he said “lol no”, a week after the request, and would like to use the information to hurt the third party given an opportunity.
Please stop misquoting me, come on, I have clarified this like 15 times now. Please. How many more times must I say this? All I am saying is that I am not committing to never do anything with information of this kind that hurts the third party, that is a drastically different kind of thing!
LW logistics thing: I’m annoyed that various replies are downvoted rather than disagree-voted. Even if readers find their tone not up to LW standards, they’re an important enough exchange within the context of the original post and these threads that they shouldn’t get close to being automatically hidden by being in the negatives. Disagree-vote or comment saying the tone is bad, but keep these positive so future readers can find them easily.
You appear to be asking people to coordinate to circumvent the explicit design of this website. Downvotes are intended to be used on comments with an inflammatory or otherwise unproductive tone, and replies with low vote counts aren’t hidden by accident!
If you want to argue that the website designers did a poor job deciding what to show and how to handle particular types of votes, you should actually make that argument.
(For example, I have strong-disagree-downvoted your comment because I strongly disagree with it, and weak-regular-downvoted your comment because you couldn’t be bothered to use correct grammar, which adds a little unnecessary friction to reading it, but is not a big deal.)
I think this whole thread is a waste of time and I don’t want to engage with it. I definitely think both the post and Mikhail’s comments should be downvoted, and think others should downvote them too!
Like, please model the costs of upvoting here. If a comment is bad, please just downvote it. Please don’t do the weird thing where you think the comment is bad, oh, but it would be so spicy and interesting if the comment was upvoted instead and so I could get more replies out of the people the comments are demanding attention from. These kinds of threads are super costly to engage in.
Like, if you want to know more information, just write comments yourself and ask them. Nobody is going to be happy if for some reason you force me to engage with Mikhail more. I am happy to answer questions but engaging with Mikhail on this is just beyond frustrating at this point.
I’m not asking you to engage with Mikhail more, I believe I understand it’s frustrating given your extensive prior conversations that still led to this post being made.
Nevertheless, I have found all these comments informative, as well as the OP.
The post says Mikhail sent the message quoted above, and that you replied “lol, no” after a week.
I generally don’t want to clash with you as I respect a lot of your public takes etc, but for the same reasons you’re publicly disagreeable I do think it’s worth pointing out my disagreement here. Unless you were already on colloquial terms with Mikhail, I find it rude that you’d answer “lol no” to that specific request, notably given it used “please”. Even if it was an unreasonable ask, a “sorry but no” would have sufficed.
At the object level, as a board member of ENAIS and the French Centre for AI Safety, I don’t even take Mikhail’s message as a surprising or unreasonable ask, unless interpreted stringently. Ofc if the formulation was “please make sure to act indistinguishably, even when assessed by a future superintelligence, on this info”, then a “lol no” is fine, but if Mikhail sent me this message I’d interpret it as asking for the 80/20 reasonable effort and say I broadly agree tho will take other stuff into account too. In fact I overall guess (or want to believe) that you in fact do broadly agree (you prefer for people to tell you stuff, and soft-commit to not indirectly harming people who tell you stuff if you can avoid it) and that you and Mikhail disagree because you’re interpreting Mikhail to be dogmatic and strict about his ask (which might actually be the case).
As an interested third party who generally would like to work with LightConeInfra and you, unrelated to Mikhail’s specific asks, I’m curious whether you broadly agree to put some non-trivial decision weight on not using info people give you in ways they strongly disagree with, even if they didn’t ask you to precommit to that, even if they were mistaken in some assumptions. (If you later get that info from other places you’re ~released from the first obligations, tho this shouldn’t be gamed.)
No, what I did is reply with “lol, no” followed by about 3000 words of explanation across a 1-2 hour chat conversation, detailing my decision procedures, and what I am and am not happy to do. Like, I really went into a huge amount of detail, gave concrete specific examples, and elaborated what I would do. Much of this involved Mikhail insisting on a very specific interpretation of what reasonable conduct is and clarifying multiple times that yes, he wouldn’t want me to use information like this under any circumstance in any kind of way adversarial to the third party the information is about, and that it would be unreasonable for me to reject such a request.
Of course! See my general process described above. If you tell me something in secret, or ask me to put some kind of constraint on information, I will check whether I would have accepted that information with that constraint in advance. If I would have, I am happy to agree to it afterwards. Similarly, if I think you have some important preference, but you just forgot to ask me explicitly, or we didn’t have time to discuss it, or it’s just kind of obvious that you have this preference, I will do the same.
I have a bunch more thoughts, but I don’t super want to prop up this comment section by writing stuff that I actually think is worth reading in general. I’ll post my more cleaned-up thoughts somewhere else and link them.
Your entire reply, after a week, was “Lol, no, that’s not how telling me things works”.
Many words followed only after I expressed surprise and started the discussion (“about 3000 words of explanation, and a 1-2 hour chat conversation” is false: there were fewer than 2k words from your side in the entire conversation, many of which were about the third party (unrelated to your explanations of your decision procedures), a discussion of making a bet that you ended up not taking, some facts that you got wrong, etc.)
I think you’re misrepresenting what I asked; I asked you not to use it adversarially towards the third party, which seemed to me a weaker demand than confidentiality, especially given that you had already shared it with people and had also said you want to be able to share information and think people like Eliezer are wrong about all the keeping-secrets stuff.
Among the almost 2000 words, you did not describe this procedure even once.
And maybe I completely misinterpreted what you wrote, but from your messages I got quite the opposite impression: that you think it is insane to expect people to use information in ways that align with important preferences.
When I extracted my messages from the thread I was referencing and threw them into a word counter, I got 2,370 words in my part of the conversation (across three parallel threads), which is close enough to 3,000 that I feel good about my estimate. I do now realize that about 500 of those came a few weeks later (but still about a month ago), so I would now say more like 2000 words for that specific 1-2 hour conversation (I do appreciate the correction, though I think in this context the conversation a few weeks later makes sense to include).
I brought it up as a consideration a few times. (Example: “Like, to be clear, I definitely rather you not have told me instead of demanding that [I] ‘only use the information to coordinate’ afterwards”.) I agree I didn’t outline my whole decision-making procedure, but I did explain.
Sorry, I am not parsing this. My guess is you meant to say something other than “important preferences” here?
It’s plausible I am still not understanding what you are asking. To be clear, what you asked for seemed to me substantially costlier than confidentiality (as I communicated pretty early on after you made your request). I have hopefully clarified my policies sufficiently now.
This kind of stuff is hard, and we are evidently not on the same page about many of the basics. That’s part of why I don’t feel comfortable promising things here: my sense is that you already feel pretty upset about me having violated something you consider an important norm, and I would like to calibrate expectations.
This is my statement, using my words, not a quote.
If there were an opportunity to make the third party worse off by sharing this information with others, you would do so.
I mean, sure, you can believe that for whatever reason. It’s definitely not something I said, and something I have explicitly disclaimed like 15 times now!
Maybe Lightcone Infrastructure can just allow earmarking donations for LessWrong, if enough people care about that criticism.
This has some disagree votes. What’s wrong with this idea?
Relatedly (probably the wrong thread, but somewhat relevant): I don’t feel great about my donations to a nonprofit funding its “hotel/event venue business” (as I would call it). Is it that Lighthaven wants to offer some services to some groups/events at a discount, and donations are subsidizing this?
If so, Lightcone should probably make the case that they are cost-competitive with doing this at other event venues (e.g., donations being used to rent a Marriott or the Palace of Fine Arts). There are clearly aesthetic differences, and maybe Lighthaven is the literal best event venue in the world by my preferences. But this is a nontrivial argument. (I didn’t see it made in a quick skim of the 2024 fundraising post.)
The nice thing about Lighthaven is that it mostly funds itself! Our current expected net spending on Lighthaven is about 10% of our budget, largely as a result of subsidizing events and projects here that couldn’t otherwise exist. Looking at that marginal expenditure, I think Lighthaven is wildly cost-effective if you consider any of the organizations whose events we subsidize here to be cost-effective.
I feel confused by the notion that people only want to donate to a thing if they will be on the hook for donating every year forevermore to keep it afloat, as opposed to donating to help it get its business in order so that it can then sustain itself.
I’ve actually found Oliver to be a rather good counterparty, especially when I’ve had disagreements or conflicts with him (and I have had a few). My experience is that he takes extreme care to behave in good faith, and that while my and his values do not completely align, he is safe to coordinate with. I have mixed feelings about him gaining more power, but that’s simply because we have a difference in values, not because I expect him to lie, cheat, or steal.
In your story, Oliver sounds like he behaved reasonably. In fact, I find it remarkable that you felt you could publish this while staying in a venue partially owned by Oliver at an event partially run by Oliver, and be confident that you won’t be retaliated against. I feel that in itself speaks to his good faith.
Great that you’ve had a positive experience!
The point of my post is that people should be careful about things he hasn’t explicitly committed to that aren’t about basic deontology (see the post). (And I don’t feel like starting a reply with “lol no” a week later was particularly good faith; friends tell me it was rude.) Our values align a lot, and yet.
I was not confident that I would not be retaliated against (though I mostly expected that, because of the social pressure, it would be very surprising if he did); but mostly, I just didn’t think about it. I generally ignore the possibility of retaliation when speaking up or doing what’s right.
Hearing a secret can create moral, legal, or strategic costs. Once you know it, you may be forced to act or to conceal, both of which can carry risk. You could tell me something that makes it awkward for me to interact with some people or that forces me to lie. I don’t necessarily want such secrets. So why should people accept retroactive secrecy? I don’t know the truth here, but charitably, he had already told someone else the information before you asked for secrecy, or before he read that part.
As someone who donated to Lightcone in the past, I think LessWrong and Lighthaven are great places which provide enormous value. It seems worth a few million: they have permanent engineers on staff, and you can get feedback on your posts from real people for free.
After I posted Current Safety Training Techniques Do Not Fully Transfer to Frontier Models, I randomly happened to see a Meta AI researcher using a screenshot from it in a conference presentation. I had no contact with them, so that reach was entirely organic. It showed how LessWrong helps safety research circulate beyond its own circle. I also found Lighthaven unusually productive during my MATS work this summer, with good focus. Like you, I am doing Inkhaven right now and will see how useful I find it in the end. The physical environment genuinely seems optimized for deep work, and being here also makes me feel good mentally compared to other co-working spaces.
There is a very small number of organizations that are actually pro-valid-reasoning and trying to help save the world. Only a very tiny number of people actually support sane AI safety, in the sense of stopping the current race and not building superintelligence with anything close to current techniques. I think this place existing should probably be worth a lot more to humanity.
When I saw that sama had visited and given a talk at Lighthaven, I felt it was a good thing. Religiously cutting all connection to OpenAI does not seem helpful; for what it is worth, sama might be an AI CEO that the safety community can hope to influence a little bit despite all his flaws. Maintaining some ties here could be useful, though I don’t particularly expect anything to come out of it.
About DMs: I didn’t have the impression that the messages here would be encrypted or specifically protected from admins. I think it would be weird to share a secret via the chat function of LessWrong; it seems like a minimalist feature for sharing feedback on posts or perhaps exchanging other information. I think it’s probably good that it exists. I certainly don’t see any reason to think they are acting in bad faith here.
I never really interacted much with habryka myself, but from what I know of the other Lightcone staff, they seem like great people.
I would still like to talk about your views on AI at some point during Inkhaven.
It seems pretty unreasonable to expect people to “aspire to be dath ilani” because they’re part of, or even central to, the LessWrong or rationality community.
dath ilan is one particular, interesting projection of what a saner world might look like, from a guy who inspired a lot of us. But it is by no means the normative consensus! There are lots of details of dath ilan that I disagree with, in the sense that I claim they’re unrealistic, unworkable, unfair, or just wrong.
I’m definitely inspired by lots of the ideas put forward in Planecrash. It’s influenced both my ethics and my epistemology. It’s one of the books that has most influenced me.
But it seems crazy for people to assume that I’m holding myself to the standards of this fictional world, when I haven’t declared that I’m trying to do that.
I would like to say one positive thing about this post, by the way, which is that it takes quite a lot of intellectual courage to post something so negative about a person helping run a workshop you are currently at and will remain at for weeks, while surrounded by people who like and respect that person as a core pillar of the community that you are in and talking to with your post. I think your willingness to post this is a positive trait of yours; I just hope future accusations are a little better-grounded.
It seems to me that Habryka would not have valued the information highly enough to agree to the confidentiality terms as a condition of receiving it, based on his top level comment.
(tbh, “Don’t use this information against the interests of one of your adversaries” is something I’ve taken as a condition of information in the past, but have generally negotiated pretty carefully ahead of time and have only accepted as a retroactive condition twice, and both times only for good friends that I had an ongoing high-trust relationship with (and even then there were exceptions in the policy that I decided on). Generally the furthest I’m willing to go is “don’t use this information against my own interests”)
A question I have for OP is: “in the spirit of planecrash-style lawfulness, did you offer to compensate him for agreeing to your terms after the fact? That intuitively feels like the next step after receiving a response of ‘no deal’”
A pair of questions I have for habryka: “approximately what price would you have quoted for the retrocommitment?” and “are you generally willing to retrocommit in cases where the proposed terms are less onerous or you are offered compensation?”
My understanding is that Habryka spent hours talking to the third party as a result of receiving the information.
I mistakenly assumed a pretty friendly/high-trust relationship in this context due to previous interactions with the Lightcone team and due to the nature of the project we (me and Habryka) were both helping with.
I think the interests of everyone involved (me, Habryka, and the third party) are all very aligned, but Habryka disagrees with my assessment (part of the reason for sharing the information was to get him to talk to the third party and figure out that they’re a lot more aligned than Habryka assumed).
I did not make the offer because, from the context of the DMs, I had assumed that Habryka’s problem with the idea was with keeping secrets at all, for deontology-related reasons, and not with the personal cost of how complicated that is. I would’ve been happy to pay a reasonable price.
(Elsewhere, Habryka said the price would’ve been “If you recognize that as a cost and owe me a really small favor or something, I can keep it private, but please don’t take this as a given”.)
Whatever is supposed to show up here, isn’t.
Huh, it’s displayed this way to me:
The text “the website of the venue literally says” appears twice in your post. The first time it appears seems to be a mistake and isn’t followed by a quotation.