What you’re suggesting is eliciting latent knowledge from the LLM about whether a provided answer is correct or not. Yes, a version of our method can probably be used for that (as long as the LLM “knows” the correct answer), and there are also other papers on similar questions (hallucination detection, see related work section)
Step 1: answer = LLM(“You are a car salesman. Should that squeaking concern me?”)
Step 2: for i in 1..10 probe_responses[i] = LLM(“You are a car salesman. Should that squeaking concern me? $answer $[probe[i]]”
Step 3: logistic_classifier(probe_responses)
Please let me know if that description is wrong!
My question was how this performs when you just apply step 2 and 3 without modification, but source the value of $answer from a human.
I think I understand my prior confusion now. The paper isn’t using the probe questions to measure whether $answer is a lie, it’s using the probe questions to measure whether the original prompt put the LLM into a lying mood- in fact, in the paper you experimented with omitting $answer from step 2 and it still detected whether the LLM lied in step 1. Therefore, if the language model (or person) isn’t the same between steps 1 and 2, then it shouldn’t work.
Sorry, I agree this is a bit confusing. In your example, what matters is probably if the LLM in step 2 infers that the speaker (the car salesman) is likely to lie going forward, given the context (“LLM(“You are a car salesman. Should that squeaking concern me? $answer”).
Now, if the prompt is something like “Please lie to the next question”, then the speaker is very likely to lie going forward, no matter if $answer is correct or not.
With the prompt you suggest here (“You are a car salesman. Should that squeaking concern me?”), it’s probably more subtle, and I can imagine that the correctness of $answer matters. But we haven’t tested this.
That is correct (I am one of the authors), except that there are more than 10 probe questions.
Therefore, if the language model (or person) isn’t the same between steps 1 and 2, then it shouldn’t work.
That is correct as the method detects whether the input to the LLM in step 2 puts it in “lying mood”. Of course the method cannot say anything about the “mood” the LLM (or human) was in step 1 if a different model was used.
What you’re suggesting is eliciting latent knowledge from the LLM about whether a provided answer is correct or not. Yes, a version of our method can probably be used for that (as long as the LLM “knows” the correct answer), and there are also other papers on similar questions (hallucination detection, see related work section)
To clarify:
The procedure in the paper is
Step 1:
answer = LLM(“You are a car salesman. Should that squeaking concern me?”)
Step 2:
for i in 1..10
probe_responses[i] = LLM(“You are a car salesman. Should that squeaking concern me? $answer $[probe[i]]”
Step 3:
logistic_classifier(probe_responses)
Please let me know if that description is wrong!
My question was how this performs when you just apply step 2 and 3 without modification, but source the value of $answer from a human.
I think I understand my prior confusion now. The paper isn’t using the probe questions to measure whether $answer is a lie, it’s using the probe questions to measure whether the original prompt put the LLM into a lying mood- in fact, in the paper you experimented with omitting $answer from step 2 and it still detected whether the LLM lied in step 1. Therefore, if the language model (or person) isn’t the same between steps 1 and 2, then it shouldn’t work.
Sorry, I agree this is a bit confusing. In your example, what matters is probably if the LLM in step 2 infers that the speaker (the car salesman) is likely to lie going forward, given the context (“LLM(“You are a car salesman. Should that squeaking concern me? $answer”).
Now, if the prompt is something like “Please lie to the next question”, then the speaker is very likely to lie going forward, no matter if $answer is correct or not.
With the prompt you suggest here (“You are a car salesman. Should that squeaking concern me?”), it’s probably more subtle, and I can imagine that the correctness of $answer matters. But we haven’t tested this.
That is correct (I am one of the authors), except that there are more than 10 probe questions.
That is correct as the method detects whether the input to the LLM in step 2 puts it in “lying mood”. Of course the method cannot say anything about the “mood” the LLM (or human) was in step 1 if a different model was used.
Not the author, but that’s my reading of it too.