Huh. They certainly say all the right things here, so this might be a minor positive update on OpenAI for me.
Of course, the way it sounds and the way it is are entirely different things, and it’s not clear yet whether the development of all these serious-sounding safeguards was approached with making things actually secure in mind, as opposed to safety-washing. E.g., are they actually going to stop anyone moderately determined?
Hm, it’s been five minutes and it looks like there’s no Pliny jailbreak yet. That’s something. Maybe Pliny doesn’t have access yet. (Edit: Yep.)