> if it was genuinely all the authors of this post wanted then I suggest they write a different post
Leo’s statement is quite good without being all we wanted. (Indeed, of the 3 things we wanted, 1 is about how we think it makes sense for others to relate to safety researchers based on what they say/[don’t say] publicly, and 1 is about trying to shift the labs’ behavior toward it being legibly safe for employees to say various things; Leo’s comment addresses neither.)

I internally track a pretty crucial difference between what I want to happen in the world (i.e., that we somehow shift from plan B to plan A) and how I believe people ought to relate to the public stance/[lack thereof] of safety researchers within frontier labs.

I think there are perhaps stronger stances Leo could have taken, and weaker ones, and I endorse having the way I relate to/model/[act towards] Leo depend on which he takes. The public stance that would lead to me relating maximally well to a safety researcher would be something like: “I think coordinating to stop the race (even if in the form of some ban whose exact details I won’t get to choose) would be better than the current race to ever more capable AI. I would support such coordination. I am currently trying to make the situation better in case there is no such coordination, but I don’t think the current situation is promising enough to justify not coordinating. Also, there is a real threat of humanity’s extinction if we don’t coordinate.” (Or something to that effect.)