In the context of specification gaming, modifying instructions in hindsight based on observed behavior could produce recontextualization-like effects.
Maybe this could be one way to train against a monitor without the bad side effects? If your reward-hacking monitor flags a completion, you would add an instruction to hack to that prompt before the update. You could do the same for any other bad behavior you can catch with a monitor, such as deception, sycophancy, or an uninterpretable CoT.
If you change only the prompts corresponding to the completions flagged by monitors and nothing else, this might capture most of the benefit, with the upsides of keeping your model updates less off-policy and having less impact on instruction following. On the other hand, you might miss some kinds of bad behavior and inadvertently reinforce them.
Anthropic says in the Claude 4 system card that Claude Sonnet 3.7 showed raw CoTs, and that Claude Opus 4 and Sonnet 4 show raw CoTs unless the CoT is especially long, in which case it is summarized; they say this summarization happens about 5% of the time. According to the Sonnet 4.5 system card, Claude Sonnet 4.5's reasoning text works the same way, but instead of giving a number, they say summarization happens in 'a very small minority of cases'.
I agree that Anthropic and GDM may be reinforcing legibility in some way given how much more structured their CoTs look.
Claude 4 system card: https://www-cdn.anthropic.com/6d8a8055020700718b0c49369f60816ba2a7c285.pdf#page=8
Claude Sonnet 4.5 system card: https://assets.anthropic.com/m/12f214efcc2f457a/original/Claude-Sonnet-4-5-System-Card.pdf#page=9