Maybe so; I don’t think it would be wrong to do that. Still, it does feel like a more hostile act, and adding noise to a signal is qualitatively different from falsifying a signal, which is why I hesitated to recommend it (it was actually my first instinct). It’s very possible I’m just being silly, but that’s why I didn’t suggest it.

If OpenAI were going all in on dialing up the persuasiveness, I don’t think I would have hesitated. But they’ve earned a bit of goodwill from me on this very specific dimension by making the ChatGPT 5 models significantly less bad in this respect.