Overall I think this is cool work and I’m glad you did it! And I’d be keen to discuss follow-up work ideas with y’all if you are interested :)
Some critical remarks:
- I’d be substantially more interested in the setting where the monitor doesn’t have access to the CoT. IMO, projects like yours (i.e. attempts to uplift trusted monitoring) should ideally try to make some conservative assumptions to remain relevant for when things get scarier, one of them being that the scheming model isn’t openly/legibly scheming.
- Separately, I think you should be more upfront about this earlier in the post, since monitors with CoT access are a very non-central example of black-box monitors, if they count as black-box at all. (Sure, they’re not white-box in the sense of accessing model weights, but in spirit, accessing the CoT is not exactly black-box either.)
(Edits: Merged my comments into one and lightly edited style)
Appreciate the feedback and definitely keen to discuss follow-ups; we’ll drop you a message.
A few thoughts in response to your points:
Our MVP monitor is trained on transcripts that include CoT, but we’re not currently committed to building “thoughts + actions” monitors over “action-only” ones. We’re keeping both options open for now and are currently running the “action-only” experiments mentioned in our “immediate priorities” section. We’re still unsure which direction would be most impactful.
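To make the “thoughts + actions” vs. “action-only” distinction concrete, here’s a minimal sketch of how the two monitor inputs might differ. The transcript schema and the `render_for_monitor` helper are hypothetical, for illustration only, not our actual pipeline:

```python
# Hypothetical transcript format: each step has a visible action and,
# optionally, the chain-of-thought that preceded it.
transcript = [
    {"cot": "I should inspect the config before editing it...", "action": "cat config.yaml"},
    {"cot": "Now modify the deploy script.", "action": "sed -i 's/v1/v2/' deploy.sh"},
]

def render_for_monitor(steps, include_cot: bool) -> str:
    """Render a transcript as monitor input, with or without CoT."""
    lines = []
    for step in steps:
        if include_cot and step.get("cot"):
            lines.append(f"THOUGHT: {step['cot']}")
        lines.append(f"ACTION: {step['action']}")
    return "\n".join(lines)

# A "thoughts + actions" monitor sees both fields; an "action-only"
# monitor is evaluated on the same transcripts with the CoT stripped.
full_view = render_for_monitor(transcript, include_cot=True)
action_only_view = render_for_monitor(transcript, include_cot=False)
```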
Since models might start reasoning internally soon, it may make sense to start exploring monitors now that don’t rely on seeing their thoughts explicitly. On the other hand, CoT could help us catch models that are capable of scheming but might still have to rely heavily on explicit reasoning traces to do so. It sounds like you’re placing more weight on the former, which is interesting.
We’ve made it clearer in the blog post that although we’re currently using CoT, we’re considering both approaches.