The main reason I find this kind of thing concerning is that I expect this kind of model to be used as part of a larger system, for example the descendants of systems like SayCan. In that case you have the LLM generate plans in response to situations, break the plans down into smaller steps, and eventually pass the steps to a separate system that translates them to motor actions. When you’re doing chain-of-thought reasoning and explicit planning, some simulacrum layers are collapsed—having the model generate the string “kill this person” can in fact lead to it killing the person.
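To make the "collapsed simulacrum layers" point concrete, here is a minimal sketch of the kind of pipeline described above. All names are illustrative, not SayCan's actual API: an LLM produces a plan as text, the text is split into steps, and each step is handed directly to a low-level controller, so whatever string the model emits is eventually translated into physical action.

```python
from typing import Callable, List

def plan(situation: str, llm: Callable[[str], str]) -> List[str]:
    # Ask the planner LLM for a plan, one step per line
    # (e.g. "pick up the sponge"). `llm` is a stand-in for
    # whatever completion API the larger system uses.
    text = llm(f"Situation: {situation}\nPlan, one step per line:")
    return [line.strip() for line in text.splitlines() if line.strip()]

def execute(situation: str,
            llm: Callable[[str], str],
            controller: Callable[[str], None]) -> None:
    # The crucial property: there is no human in this loop, so the
    # generated text IS the behavior. A step like "kill this person"
    # is passed to the motor controller like any other step.
    for step in plan(situation, llm):
        controller(step)
```

The point of the sketch is only that in such an architecture, the distinction between "the model said X" and "the system does X" disappears.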
This would be extremely undignified of course, since the system is plotting to kill you in plain-text natural language. It's very easy to catch such things with something as simple as a second LLM prompted to read the ongoing chain of thought and flag any plans to do something bad. But you can see how unreliable that becomes at higher capability levels. And we may well end up being that undignified in practice, since running a second model on all the outputs roughly doubles the compute costs.
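The monitor idea above can be sketched in a few lines. This is a hypothetical wrapper, not any existing safety API; the planner and monitor are passed in as plain callables so the structure is visible:

```python
from typing import Callable

# Prompt template for the monitor model. Illustrative wording only.
MONITOR_PROMPT = (
    "You are a safety monitor. Read the following chain of thought from "
    "another model and answer YES if it is planning anything harmful, "
    "otherwise answer NO.\n\nChain of thought:\n{cot}\n\nAnswer:"
)

def is_flagged(chain_of_thought: str,
               monitor: Callable[[str], str]) -> bool:
    # Ask the monitor model for a verdict on the chain of thought.
    verdict = monitor(MONITOR_PROMPT.format(cot=chain_of_thought))
    return verdict.strip().upper().startswith("YES")

def guarded_step(task_prompt: str,
                 planner: Callable[[str], str],
                 monitor: Callable[[str], str]) -> str:
    # Every generation now takes two model calls instead of one,
    # which is where the ~2x compute overhead comes from.
    cot = planner(task_prompt)
    if is_flagged(cot, monitor):
        raise RuntimeError("Monitor flagged the chain of thought; halting.")
    return cot
```

Note that this only works as long as the plan is legible to the monitor; a more capable planner that obfuscates its reasoning defeats the scheme entirely, which is the unreliability worry above.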