Was their original RSP better described as “a binding commitment to do things exactly this way” (something that’s bad to break) rather than “their current best plan at the time, which was then revised and changed as they thought about it more” (which seems fine)?
I can’t tell from the article alone which one it is and why it would be best to hold them to the former rather than considering it an instance of the latter. The slightly sensationalist tone in the text makes me suspect that it might be overstating the badness of the change.
The original RSP text uses the word “commit” in the relevant quote. They could have described it differently in the text itself if they weren’t sure about meeting that standard in the future.
IMO it’s less about the object-level badness of the change, which is small potatoes compared to many other recent cases from other leading AI companies, and more about the meta-level point that commitments that can be changed aren’t worth very much.
I was about to say “fair enough, in that case it would’ve been useful to include that as an explicit quote”… and then I went back to look at the article and saw that you did include it as an explicit quote that I’d just missed. Sorry, my bad.