I think this is along the right sort of lines. Indeed I think this plan is the sort of thing I hoped to prompt people to think about with the post. But I think there are a few things wrong with it:
I think premise 1 is big if true, but I doubt it is as easy as this: see the DeepMind fact-finding sequence for some counter-evidence. It’s also easy to imagine this being true for some categories of static facts about the external world (e.g. Paris being in France), but you need to be careful about extending this to the category of all propositional statements (e.g. the model thinks that this safeguard is adequate, or the model can’t find any security flaws in this program).
Relatedly, your second bullet point assumes that you can unambiguously identify the ‘fact’ related to what the model is currently outputting and look it up in the model; does this require you to find all the fact representations in advance, or is this computed on the fly?
I think that detecting/preventing models from knowingly lying would be a good research direction and it’s clearly related to strategic deception, but I’m not actually sure that it’s a superset (consider a case when I’m bullshitting you rather than lying; I predict what you want to hear me say and I say it, and I don’t know or care whether what I’m saying is true or false or whatever).
But yeah, I think this is a reasonable sort of thing to try, though you’d need to do a lot of work to convince me of premise 1, and indeed I doubt premise 1 is true a priori, though I am open to persuasion on this. Note that premise 1 being true of some facts is a very different claim from it being true of every fact!
I think it’s important to push back against the assumption that this will always happen, or that something like the refusal direction has to exist for every possible state of interest.
And to expand on this a little bit more: it seems important that we hedge against this possibility by at least spending a bit of time thinking about plans that don’t rhyme with ‘I sure hope everything turns out to be a simple correspondence’! I think Eleni and I feel that this is a surprisingly widespread move in interpretability plans, which is maybe why some of the post is quite forceful in arguing against it.
I agree with Lewis. A few clarificatory thoughts.
1. I think that the point of calling it a category mistake is exactly about expecting a “nice simple description”. It will be something within the network, but there’s no reason to believe that this something will be a single neural analog.
2. Even if there are many single neural analogs, there’s no reason to expect that all the safety-relevant properties will have them.
3. Even if all the safety-relevant properties have them, there’s no reason to believe (at least for now) that we have the interp tools to find them in time i.e., before having systems fully capable of pulling off a deception plan.
So, even if you don’t buy 1+2, from 3 it follows that we have to figure this out beforehand. I’m also worried that claims such as “we can make important forward progress on particular intentional states even in the absence of such a general account” could further lead to a slippery slope that more or less embraces having the dangerous thing first without sufficient precautions (not saying you’re in favor of that, though), especially since many of the safety-relevant states seem to be interconnected.
Can you clarify what you mean by ‘neural analog’ / ‘single neural analog’? Is that meant as another term for what the post calls ‘simple correspondences’?
Even if all the safety-relevant properties have them, there’s no reason to believe (at least for now) that we have the interp tools to find them in time i.e., before having systems fully capable of pulling off a deception plan.
Agreed. I’m hopeful that perhaps mech interp will continue to improve and be automated fast enough for that to work, but I’m skeptical that that’ll happen. Or, alternatively, I’m hopeful that we turn out to be in an easy-mode world where there is something like a single ‘deception’ direction that we can monitor, and that’ll at least buy us significant time before it stops working on more sophisticated systems (plausibly due to optimization pressure / selection pressure if nothing else).
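To make the easy-mode version concrete, here’s a minimal sketch of the kind of monitoring I’m imagining; the ‘deception’ direction, the threshold, and the activations are all made-up placeholders, and nothing here assumes such a direction actually exists:

```python
# Minimal sketch (assumption-laden): monitoring activations against a single
# hypothetical 'deception' direction, in the spirit of refusal-direction work.
# The direction, threshold, and activations are placeholders.
import numpy as np

def deception_scores(resid_acts: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project residual-stream activations (n_tokens, d_model) onto a unit direction."""
    unit = direction / np.linalg.norm(direction)
    return resid_acts @ unit  # one scalar score per token position

def flag_generation(resid_acts: np.ndarray, direction: np.ndarray, threshold: float = 3.0) -> bool:
    """Flag the generation if any token's projection exceeds the (made-up) threshold."""
    return bool(np.any(deception_scores(resid_acts, direction) > threshold))

# Toy usage with random stand-ins for real activations and a fitted direction.
rng = np.random.default_rng(0)
acts = rng.normal(size=(128, 4096))    # pretend residual stream over 128 tokens
candidate_dir = rng.normal(size=4096)  # pretend 'deception' direction from probing
print(flag_generation(acts, candidate_dir))
```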
I’m also worried that claims such as “we can make important forward progress on particular intentional states even in the absence of such a general account.” could further lead to a slippery slope that more or less embraces having the dangerous thing first without sufficient precautions
I agree that that’s a real risk; it makes me think of Andreessen Horowitz and others claiming in an open letter that interpretability had basically been solved and so AI regulation isn’t necessary. On the other hand, it seems better to state our best understanding plainly, even if others will slippery-slope it, than to take the epistemic hit of shifting our language in the other direction to compensate.
I think premise 1 is big if true, but I doubt it is as easy as this: see the DeepMind fact-finding sequence for some counter-evidence.
I haven’t read that sequence; I’ll check it out, thanks. I’m thinking of work like the ROME paper from David Bau’s lab, which suggests that fact storage can be identified and edited, and various papers like this one from Mor Geva et al. that find evidence that the MLP layers in LLMs are largely key-value stores.
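To illustrate the picture I have in mind from that line of work, here’s a toy sketch of reading an MLP layer as a key-value memory; the shapes and the ReLU are simplifications for illustration, not claims about any particular model:

```python
# Rough illustration of the "MLP as key-value memory" reading: rows of the input
# projection act as keys matched against the residual stream, and the activation
# pattern mixes the corresponding value vectors (columns of the output projection)
# back in. Shapes and nonlinearity are simplified placeholders.
import numpy as np

def mlp_as_kv_memory(x: np.ndarray, W_in: np.ndarray, W_out: np.ndarray) -> np.ndarray:
    """x: (d_model,); W_in: (d_mlp, d_model) holds keys; W_out: (d_model, d_mlp) holds values."""
    key_match = W_in @ x                 # how strongly each key fires on this input
    coeffs = np.maximum(key_match, 0.0)  # ReLU stands in for the real nonlinearity
    return W_out @ coeffs                # weighted sum of value vectors

rng = np.random.default_rng(0)
d_model, d_mlp = 64, 256
x = rng.normal(size=d_model)
out = mlp_as_kv_memory(x, rng.normal(size=(d_mlp, d_model)), rng.normal(size=(d_model, d_mlp)))
print(out.shape)  # (64,)
```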
Relatedly, your second bullet point assumes that you can unambiguously identify the ‘fact’ related to what the model is currently outputting and look it up in the model; does this require you to find all the fact representations in advance, or is this computed on the fly?
It does seem like a naive approach would require pre-identifying all facts you wanted to track. On the other hand, I can imagine an approach like analyzing the output for factual claims and then searching for those in the record of activations during the output. Not sure, seems very TBD.
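Very roughly, and with every component hypothetical (the claim extractor, the probe direction, the scoring), the on-the-fly version might look something like this:

```python
# Hypothetical sketch of the on-the-fly idea: pull factual claims out of the
# generated text, then check how the cached activations score against a probe.
# extract_claims and the probe weights are placeholders, not real components.
from typing import Dict, List
import numpy as np

def extract_claims(output_text: str) -> List[str]:
    # Placeholder: in practice this might be another LLM or an IE pipeline.
    return [s.strip() for s in output_text.split(".") if s.strip()]

def probe_believes(claim: str, cached_acts: np.ndarray, probe_w: np.ndarray) -> float:
    """Score how strongly the cached activations align with a probe direction."""
    del claim  # in a real system the probe (or its inputs) would depend on the claim
    return float((cached_acts @ probe_w).mean())

def audit_output(output_text: str, cached_acts: np.ndarray, probe_w: np.ndarray) -> Dict[str, float]:
    return {claim: probe_believes(claim, cached_acts, probe_w) for claim in extract_claims(output_text)}

rng = np.random.default_rng(0)
acts = rng.normal(size=(32, 512))  # pretend per-token activations saved at generation time
probe = rng.normal(size=512)       # pretend truthfulness probe direction
print(audit_output("Paris is in France. The code has no flaws.", acts, probe))
```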
I think that detecting/preventing models from knowingly lying would be a good research direction and it’s clearly related to strategic deception, but I’m not actually sure that it’s a superset (consider a case when I’m bullshitting you rather than lying; I predict what you want to hear me say and I say it, and I don’t know or care whether what I’m saying is true or false or whatever).
Great point! I can certainly imagine that there could be cases like that, although I can equally imagine that LLMs could be consistently tracking the truth value of claims even if that isn’t a big factor determining the output.
But yeah, I think this is a reasonable sort of thing to try, though you’d need to do a lot of work to convince me of premise 1, and indeed I doubt premise 1 is true a priori, though I am open to persuasion on this. Note that premise 1 being true of some facts is a very different claim from it being true of every fact!
That seems reasonable. I’ve mostly had the impression that 1 has generally been true in specific cases where researchers have looked for it, but it’s definitely not something I’ve specifically gone looking for. I’ll be interested to read the sequence from DeepMind.