Yup! I just think there’s an unbounded way that a reader could view his comment: “oh! There are no current or future consequences at OAI for those who sign this statement!”
…and I wanted to make the bound explicit: real protections, into the future, can't plausibly be offered by anyone. Surely most OAI researchers are thinking far enough ahead to feel the pressure of this bound (whether or not it keeps them from signing).
I’m still glad he made this comment, but the Strong Version is obviously beyond his reach to assure.
I agree no one can make absolute guarantees about the future. Some people may also worry about future repercussions if they go on to work somewhere else.
This is why I suggest people talk to me if they have concerns.
FWIW I’d probably be down to talk with Boaz about it, if I still worked at OpenAI and were hesitant about signing.
I doubt Boaz would be able to provide assurances against facing retaliation from others though, which is probably the crux for signing.
(To be fair, that is a quite high bar.)