I was chatting with someone about “okay, what actually counts as organizational adequacy?” and they said “well there are some orgs I trust...” and then they listed some orgs that seemed ranked by something like “generic trustworthiness”, rather than “trustworthy with the specific competencies needed to develop AGI safely.”
And I said “wait, do you, like, consider Lightcone Infrastructure adequate here?” [i.e. the org that makes LessWrong]
And they said “yeah.”
And I was like “Oh, man, to be clear, I do not consider Lightcone Infrastructure to be a high reliability org.” And they said “Well, like, if you don’t trust an org like Lightcone here, I think we’re just pretty doomed.”
The conversation wrapped up around that point, but I want to take it as a jumping off point to explain a nuance here.
High Reliability is a particular kind of culture. It makes sense to invest in that culture when you’re working on complex, high-stakes systems where a single failure could be catastrophic. Lightcone/LessWrong is a small team that is very “move fast and break things” in most of our operations. We sometimes debate how good that is, but it’s at least a pretty reasonable call given our current set of projects.
We shipped some buggy code that broke the LessWrong frontpage on Petrov Day. I think that was dumb of us, and particularly embarrassing because of the symbolism of Petrov Day. But I don’t think it points at a particularly deeply frightening organizational failure, because temporarily breaking the site on Petrov Day is just not that bad.
If Lightcone decided to pivot to “build AGI”, we would absolutely need to significantly change our culture, to become the sort of org that people should legitimately trust with that. I think we’ve got a good cultural core that makes me optimistic about us making that transition if we needed to, but we definitely don’t have it by default.
I think “your leadership, and your best capabilities researchers, are actually bought into existential safety” is an important piece of being a highly reliable AI org. But it’s not the only prerequisite.