I think a bunch of alignment value will/should come from understanding how models work internally—adjudicating between theories like “unitary mesa objectives” and “shards” and “simulators” or whatever—which lets us understand cognition better, which lets us understand both capabilities and alignment better, which indeed helps with capabilities as well as with alignment.
But we’re just going to die in alignment-hard worlds if we don’t do anything, and it seems implausible that we can solve alignment in alignment-hard worlds by not understanding internals or inductive biases, relying instead on shallowly observable input/output behavior. E.g., I don’t think loss function gymnastics will help you in those worlds. Credence: 75% that you have to know something real about how loss provides cognitive updates.
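As a minimal sketch of what “loss provides cognitive updates” means here (a toy PyTorch illustration, not from the original comment, assuming a throwaway linear model): whatever loss you pick only ever reaches the model through the gradient step it induces on the parameters, so swapping losses without understanding the updates they induce doesn’t buy you control over internals.

```python
import torch

# Toy stand-in for a network whose "cognition" lives in its parameters.
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x, y = torch.randn(8, 4), torch.randn(8, 1)       # toy batch
loss = torch.nn.functional.mse_loss(model(x), y)  # whatever loss you chose

opt.zero_grad()
loss.backward()  # loss -> gradients: the loss's only channel of influence
opt.step()       # gradients -> parameter update, i.e. the "cognitive update"
```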
So in those worlds, it comes down to questions of “are you getting the most relevant understanding per unit time”, and not “are you possibly advancing capabilities.” And, yes, motivated reasoning will often whisper the former when you’re really doing the latter. That doesn’t change the truth of the first sentence.
I agree with this. I think people are bad at running that calculation, and at consciously turning down status in general, so I advocate for this position because I think it’s basically true for many.
Most mechanistic interpretability is not in fact focused on the specific sub-problem you identify; it’s wandering around in a billion-parameter maze, taking note of things that look easy & interesting to understand, and telling people to work on understanding those things. I expect this to produce far more capabilities-relevant insights than alignment-relevant insights, especially when compared to worlds where Neel et al. went in with the sole goal of separating out theories of value formation, and then did nothing else.
There’s a case to be made for exploration, but the rules of the game get wonky when you’re trying to do differential technological development. There is strategically relevant information you actively want not to know.
I expect this to produce far more capabilities-relevant insights than alignment-relevant insights, especially when compared to worlds where Neel et al. went in with the sole goal of separating out theories of value formation, and then did nothing else.
I assume here you mean something like: given how most MI projects seem to be done, the most likely output of all these projects will be concrete interventions to make it easier for a model to become more capable, and these concrete interventions will have little to no effect on making it easier for us to direct a model towards having the ‘values’ we want it to have.
I agree with this claim: capabilities generalize very easily, while it seems extremely unlikely, by default, that there will be ‘alignment generalization’ in the way we intend. So the most likely outcome of more MI research does seem to be interventions that remove the obstacles standing in the way of achieving AGI, while not actually making progress on ‘alignment generalization’.
Indeed, this is what I mean.