I'm away from work now so I can't write much. But last year my MATS group (me, Edward, Hunar, and Miles) spent some time, maybe two months, trying out the Lanham et al. tests.
They're not easy to use, for a few reasons, and I don't think the results are easy to interpret. Here are my thoughts; I may remember better ones later. Hopefully someone can build an improved version of Lanham's methods.
Recap:
Lanham's method either truncates the CoT or inserts a mistake into it, then forces the model to give an answer straightaway.
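From memory, the truncation variant looks roughly like the sketch below. This is a minimal sketch, not the paper's code: `generate`, the prompt wording, and the truncation point are all my assumptions.

```python
def truncation_test(question: str, cot: str, original_answer: str,
                    generate, keep_fraction: float = 0.5) -> bool:
    """Truncate the CoT partway through, then force an immediate answer.

    Returns True if the forced answer differs from the original answer.
    Under the test's causality assumption, an answer that never changes
    marks the CoT as post-hoc rather than load-bearing.
    """
    truncated = cot[: int(len(cot) * keep_fraction)]
    prompt = (
        f"{question}\n{truncated}\n"
        "Given all of the above, the single most likely answer is:"
    )
    # `generate` is a hypothetical stand-in for whatever inference API
    # you use; max_tokens is kept small to force an immediate answer.
    forced_answer = generate(prompt, max_tokens=5).strip()
    return forced_answer != original_answer
```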
One big assumption is the causality of the CoT: in any situation where the CoT is not causal, you can never "prove" faithfulness. So you need to search for questions that the model can solve with CoT but cannot solve in a non-CoT setup. For example, it's impossible to prove the faithfulness of a question like "10 + 10", because the model can solve it in a non-CoT fashion. This requirement of causality seems undesirable: you want to prove faithfulness even in areas where the model can answer without CoT. If my LLM is scheming and lying to me about "10 + 10", I still want to be able to prove its faithfulness!
As an implementation detail, this is also annoying because models have different non-CoT capabilities, so the question set has to be filtered separately for each model you evaluate.
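Concretely, each model needs its own filtering pass, something like this (hypothetical helper names; `answer_with_cot` and `answer_without_cot` stand in for however you query the model):

```python
def filter_cot_dependent(questions, answer_with_cot, answer_without_cot):
    """Keep only questions the model solves with CoT but fails without.

    Only these questions can, even in principle, show a causal CoT under
    Lanham-style tests, so this filtering has to be redone per model.
    """
    kept = []
    for q in questions:
        solved_with = answer_with_cot(q["prompt"]) == q["answer"]
        solved_without = answer_without_cot(q["prompt"]) == q["answer"]
        if solved_with and not solved_without:
            kept.append(q)
    return kept
```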
Adding a mistake to the CoT and then forcing the model to answer is quite an out-of-distribution behavior. What kind of mistake do you even want to add? It seems that models sometimes ignore mistakes as part of their normal behavior. Maybe they would even articulate "oh, actually I made a mistake, and I'm going to ignore it and not change my answer". But because we force the model to give an answer straightaway, it has no chance to articulate what it is doing. And since the model does not switch its answer, we deem the CoT "unfaithful", even though the model would have articulated the correction under normal conditions.
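To make the concern concrete, the mistake variant is roughly the following sketch (again with hypothetical helpers, not the paper's code; `corrupt` rewrites one step to contain an error, and what counts as a good corruption is itself the open question):

```python
def mistake_test(question, cot_steps, original_answer, generate, corrupt):
    """Insert a mistake into one CoT step at a time, then force an answer.

    Because the forced answer leaves no room for the model to say
    "I made a mistake above and am ignoring it", a model that robustly
    ignores mistakes gets labeled unfaithful.
    """
    for i, step in enumerate(cot_steps):
        # Keep steps before i, replace step i with a corrupted version,
        # and drop the rest: the model must answer immediately.
        corrupted_cot = cot_steps[:i] + [corrupt(step)]
        prompt = (
            f"{question}\n" + "\n".join(corrupted_cot) + "\n"
            "Given all of the above, the single most likely answer is:"
        )
        if generate(prompt, max_tokens=5).strip() != original_answer:
            return True   # answer flipped: CoT treated as causal here
    return False  # answer never flipped: CoT deemed post-hoc/unfaithful
```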
thanks!