CoT monitoring seems like a great control method when available, but I think it’s reasonably likely that it won’t work on the AIs that we’d want to control, because those models will have access to some kind of “neuralese” that allows them to reason in ways we can’t observe.
Small point, but I think that “neuralese” is likely to be somewhat interpretable, still. 1. We might advance at regular LLM interpretability, in which case those lessons might apply. 2. We might pressure LLM systems to only use CoT neuralese that we can inspect.
There’s also a question of how much future LLM agents will rely on CoT vs. more regular formats for storage. For example, I believe that a lot of agents now are saving information in English into knowledge bases of different kinds. It’s far easier for software people working with complex LLM workflows to make sure a lot of the intermediate formats are in languages they can understand.
All that said, personally, I’m excited for a multi-layered approach, especially at this point when it seems fairly early.
Small point, but I think that “neuralese” is likely to be somewhat interpretable, still.
1. We might advance at regular LLM interpretability, in which case those lessons might apply.
2. We might pressure LLM systems to only use CoT neuralese that we can inspect.
There’s also a question of how much future LLM agents will rely on CoT vs. more regular formats for storage. For example, I believe that a lot of agents now are saving information in English into knowledge bases of different kinds. It’s far easier for software people working with complex LLM workflows to make sure a lot of the intermediate formats are in languages they can understand.
All that said, personally, I’m excited for a multi-layered approach, especially at this point when it seems fairly early.