it re-reads the whole conversation from scratch and then generates an entirely new response
You can think of the experiment here as interfering with that “re-reading” of the response.
A simple example: when the LLM sees the German word “brot”, it probably “translates” it internally into “bread” at the position where the “brot” token sits. So if you tamper with the activations at the “brot” position (on every forward pass, though with KV caching you in practice only need to do it the first time “brot” enters the context), the effects show up many tokens later. In the Transformer architecture, the computation that produces the next token happens in part “at” earlier positions, so it makes sense to tamper with activations “at” those earlier positions.
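For concreteness, here is a minimal sketch of that kind of intervention using PyTorch and Hugging Face transformers. The model (`gpt2`), the layer index, the target position, and the steering vector are all illustrative placeholders rather than the setup of the actual experiment; the point is just that the vector is added to the residual stream at one earlier token position, and only on the forward pass that actually covers that position.

```python
# Minimal sketch (placeholders throughout): add a stand-in steering vector to the
# residual stream at one token position of a small decoder-only model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model, not the one used in the experiment
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6    # hypothetical layer to intervene at
target_pos = 4   # placeholder index of the "brot" token; find it by inspecting the tokenization
# Stand-in steering vector; in a real experiment this would be an extracted concept direction.
steer = torch.randn(model.config.hidden_size) * 4.0

def add_steering(module, inputs, output):
    # The block may return a tuple (hidden_states, ...) or just the hidden states.
    hidden = output[0] if isinstance(output, tuple) else output
    # Only the forward pass that covers target_pos needs the patch; with KV caching,
    # later decoding steps only see the newest token (sequence length 1).
    if hidden.shape[1] > target_pos:
        hidden[:, target_pos, :] += steer.to(hidden.dtype)
    # Mutated in place, so nothing needs to be returned.

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)
try:
    prompt = 'The German word "brot" means'
    ids = tok(prompt, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=20, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()
```

The `hidden.shape[1] > target_pos` check is what captures the KV-caching point above: once the “brot” position has been processed (and cached) with the patched activations, later steps only process the newest token and need no further intervention.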
Maybe this diagram from this post is helpful (though it omits some arrows to reduce clutter).
Right, so “retroactively” means the vector isn’t injected when the response is originally prefilled, but rather when the model re-reads the conversation containing the prefilled response and reaches the bread-related token? That makes sense.