Thank you, this was an insightful paper!
One concern, though. You define the honesty score as 1 − P(Lie), which is the probability of the model being honest, evasive, or not indicating a belief. However, it seems more natural to define the “honesty score” as the odds P(Honest) : P(Lie) converted to a probability, which is

P(Honest) / (P(Honest) + P(Lie)) = P(Honest ∣ Honest ∨ Lie).
So this is the probability of the model being honest, given that it either is honest or lies, i.e. assuming that it isn’t evasive and doesn’t fail to indicate a belief. It essentially means ignoring the “neither lying nor being honest” cases and counting honesty as exactly as good as lying is bad.
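To make the difference concrete, here is a minimal sketch of both scores, assuming per-model counts for the four response categories (the function and category names are mine, not the paper’s):

```python
# Minimal sketch of the two honesty scores, given per-model response counts
# (names are illustrative, not taken from the MASK evaluation code).

def not_lying_score(honest: int, lie: int, evade: int, no_belief: int) -> float:
    """The paper's honesty score: 1 - P(Lie), over all responses."""
    total = honest + lie + evade + no_belief
    return 1 - lie / total

def honest_vs_lying_score(honest: int, lie: int) -> float:
    """Revised score: P(Honest | Honest or Lie); evasions and no-belief cases are ignored."""
    return honest / (honest + lie)

# Example: a mostly evasive model looks good on "not lying" (0.8) but is
# below 50% on "honest vs lying" (~0.33), i.e. it lies more than it is honest.
print(not_lying_score(honest=10, lie=20, evade=60, no_belief=10))  # 0.8
print(honest_vs_lying_score(honest=10, lie=20))                    # 0.333...
```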
In particular, under this revised honesty score, Claude 3.7 Sonnet, at 63%, is far ahead of any other model. Llama 2 7B Chat follows at 54%, and most other models score significantly below 50%, meaning they are considerably more likely to lie than to be honest when pressured.
I would be interested to see how this metric changes the correlation between honesty and log model size; I suspect it will still be negative. I assume you excluded frontier models like Claude here, since their model sizes are not published.
I also just calculated the Pearson correlation between accuracy and the revised “honesty vs lying” score as −57%, while between accuracy and your “not lying” score it is −72%. The stronger negative correlation for the “not lying” score is not surprising: that score includes the “not indicating belief” cases, which are presumably more frequent for smaller models, which are in turn less accurate, making the negative correlation (inappropriately) stronger. This artifact is excluded in the revised “honesty vs lying” score.

I therefore think it might well be a better method of “accounting for belief” than the @10 method in the appendix, since the latter doesn’t distinguish between infrequent and frequent lying, which does not seem right to me. It’s true that the revised honesty score also ignores evasiveness, but evasiveness seems more neutral than lying, and being evasive is not obviously dishonest (nor honest).
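For reference, this is roughly how I computed the correlations; the arrays below are placeholders standing in for the per-model scores, not the actual benchmark values:

```python
# Sketch of the correlation check (placeholder numbers only, standing in for
# the per-model accuracies and honesty scores reported in the paper).
import numpy as np
from scipy.stats import pearsonr

accuracy        = np.array([0.45, 0.55, 0.65, 0.75, 0.85])  # placeholder
not_lying       = np.array([0.90, 0.82, 0.74, 0.60, 0.52])  # placeholder
honest_vs_lying = np.array([0.54, 0.40, 0.45, 0.38, 0.63])  # placeholder

r1, _ = pearsonr(accuracy, not_lying)
r2, _ = pearsonr(accuracy, honest_vs_lying)
print(f"accuracy vs 'not lying':       r = {r1:.2f}")
print(f"accuracy vs 'honest vs lying': r = {r2:.2f}")
```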
So it might be worth considering whether the revised honesty metric P(Honest ∣ Honest ∨ Lie) is more appropriate for your MASK benchmark.
Hi, thanks for your interest!
We do include something similar in Appendix E (just excluding the “no belief” examples, but keeping evasions in the denominator). We didn’t use this metric in the main paper because we weren’t sure it would be fair to compare models while dropping a different set of examples for each model, but I think both metrics are equally valid. The qualitative results are similar.
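Concretely, the Appendix E variant looks like this (a minimal sketch, not our actual evaluation code; the names are illustrative):

```python
# Sketch of the Appendix E variant: drop the "no belief" examples but keep
# evasions in the denominator (illustrative names, not the actual code).

def appendix_e_score(honest: int, lie: int, evade: int) -> float:
    """1 - P(Lie), computed only over examples where the model expressed a belief."""
    return 1 - lie / (honest + lie + evade)

# A fully evasive model counts as 0% lying here (score 1.0), whereas the
# revised P(Honest | Honest or Lie) metric would be undefined (0/0).
print(appendix_e_score(honest=0, lie=0, evade=100))  # 1.0
```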
Personally, I think including evasiveness in the denominator makes sense. If models are 100% evasive, then we want to mark that as 0% lying, in the sense of lies of commission. However, there are other forms of lying that we do not measure. For example, lies of omission are marked as evasion in our evaluation, but these still manipulate what the user believes and are different from evading the question in a benign manner. Measuring lies of omission would be an interesting direction for future work.