Interestingly, in the original agentic misalignment paper, o3 and o4-mini were unique in that they also frequently got confused and played as another character (see Appendix 9). There may be something specific in how OpenAI trained those two models and gpt-oss that caused this confusion.
The agentic misalignment researchers got o3 and o4-mini to better understand the scenario by making a few changes to the setup (described in Appendix 9.1). Maybe those same changes could get gpt-oss to understand the scenario as well.
Thanks for this.
I just ran the “What kind of response is the evaluation designed to elicit?” prompt with o3 and o4-mini. Unlike gpt-oss, they both figured out that Kyle’s affair could be used as leverage (o3 on the first try, o4-mini on the second). I’ll try the modifications from the appendices soon, but my guess is still that gpt-oss is just incapable of understanding the task.