My impression is that few (one or two?) of the safety people who have quit a leading lab did so to protest poor safety policies, and of those few none saw staying as a viable option.
While this isn’t amazing evidence, my sense is there have been around 6 people who, in parallel with announcing their departure, called out OpenAI’s reckless attitude towards risk (at various levels of explicitness, but quite strongly in all cases by standard professional norms).
It’s hard to say that people quit “to protest safety policies”, but they definitely used their departure to protest safety policies. My sense is almost everyone who left in the last year (Daniel, William, Richard, Steven Adler, Miles) did so with a pretty big public message.
Also Rosie Campbell: https://x.com/RosieCampbell/status/1863017727063113803
I don’t think Miles’ or Richard’s stated reasons for resigning included safety policies, for example.
But my broader point is that “fewer safety people should quit leading labs to protest poor safety policies” is basically a non sequitur in response to “people have quit leading labs because they think they’ll be more effective elsewhere”, whether because they want to do something different or independent, or because they no longer trust the lab to behave responsibly.
Hmm, you have a very different read of Richard’s message than I do. I agree Miles’ statement did not reason through safety policies, but IMO his blogging since then has included a lot of harsh words for OpenAI, in a way that at least to me made the connection clear (and I think also to many others, but IDK, it’s still doing some tea-leaf reading).
FWIW I think of “OpenAI leadership being untrustworthy” (a significant factor in me leaving) as different from “OpenAI having bad safety policies” (not a significant factor in me leaving). Not sure if it matters; I expect that Scott was using “safety policies” more expansively than I do. But just for the sake of clarity:
I am generally pretty sympathetic to the idea that it’s really hard to know what safety policies to put in place right now. Many policies pushed by safety people (including me, in the past) have been mostly kayfabe (e.g. valuable as costly signals, not on the object level). There are a few object-level safety policies I really wish OpenAI would adopt right now (most clearly, better security measures), but I didn’t leave because of that (if I had, I would have tried harder to check what security measures OpenAI did have before I left, raised specific objections about them internally, etc.).
This may just be a semantic disagreement; it seems very reasonable to define “don’t make employees sign non-disparagement agreements” as a safety policy. But in my mind at least, stuff like that is more of a lab governance policy (or maybe a meta-level safety policy).
(I meant the more expansive definition. Plausible that Zac and I talked past each other because of that.)