I don’t think I am. (“conditioned future informativity”—informativity for what? …the next/last token, which is the only thing taken into account by a causal loss which masks out the rest—that’s the definition of it! everything else like packing or doing all the sub-sequences is an optimization and doesn’t change the objective.) But feel free to expand on it and explain how the tail wags the dog in causal/decoder Transformers.
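For concreteness, here is a minimal sketch of the objective in question, assuming only a generic decoder-only model that emits a (batch, seq_len, vocab) logits tensor; the function and variable names are illustrative, not anything from this exchange.

```python
# Minimal sketch of the standard causal-LM objective (illustrative names only).
import torch
import torch.nn.functional as F

def causal_lm_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Average next-token cross-entropy over all positions.

    The loss term at position i depends only on the prediction of token i+1,
    i.e. it is the 'predict the final token' loss for the prefix tokens[:i+1];
    summing over i is the 'all sub-sequences' form mentioned above.
    """
    # Shift: position i predicts token i+1; the last position has no target.
    pred = logits[:, :-1, :]      # (batch, seq_len - 1, vocab)
    target = tokens[:, 1:]        # (batch, seq_len - 1)
    return F.cross_entropy(pred.reshape(-1, pred.size(-1)), target.reshape(-1))

if __name__ == "__main__":
    batch, seq_len, vocab = 2, 8, 50
    logits = torch.randn(batch, seq_len, vocab)
    tokens = torch.randint(0, vocab, (batch, seq_len))
    print(causal_lm_loss(logits, tokens))
```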
You’re at token i in a non-final layer. Which token’s output are you optimizing for? i+1?
By construction, a decoder-only transformer is agnostic about which future token within the context limit its representations should be informative for, except in the sense that it doesn’t need to represent detail that will be more cheaply available from future tokens.
As a transformer is also unrolled along the context dimension, with the same weights applied at every position, the architecture itself is effectively required to be generic both in what information it gathers and in where that information gets used. The bias towards next-token prediction is not so much a consequence of the training objective in isolation as of competitive advantage: at position i, the network has an advantage in predicting token i+1 over its copies at earlier positions, because it has seen more recent tokens, and an advantage over its copies at later positions, because it is the last position for which token i+1 still has to be predicted at all (for them it is already in context). However, if a token is more informative about some distant future token than about the next token specifically, say a name that might be referenced much later, one would expect the dominant learnt effect to be non-myopic: representing it for later use in some timestamp-invariant way.
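A toy rendering of the ‘competitive advantage’ framing above, assuming nothing beyond the standard lower-triangular causal mask (names illustrative): position i is the earliest position that has seen everything up to token i, and the last position whose loss term still targets token i+1.

```python
# Which tokens each position can attend to, and which target its loss term has.
import torch

seq_len = 5
# mask[i, j] is True where position i may attend to position j (j <= i).
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

for i in range(seq_len - 1):
    visible = causal_mask[i].nonzero().flatten().tolist()
    # Earlier positions see strictly less context; later positions already
    # have token i+1 in their context, so they carry no loss term for it.
    print(f"position {i}: attends to {visible}, loss targets token {i + 1}")
```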
You’re at token i in a non-final layer. Which token’s output are you optimizing for? i+1?
I already addressed this point. If I’m in a non-final layer then I can be optimizing for arbitrary tokens within the context window, sure, and ‘effectively’ predicting intermediate tokens because that is the ‘dominant’ effect at that location… insofar as it is instrumentally useful for predicting the final token using the final layer. Because that is where all the gradients flow from, and why the dog wags the tail.
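Since the disagreement at this point is about where the gradients actually arrive, here is a toy check under a single causal self-attention layer feeding per-position losses; the tiny model and all names are illustrative assumptions, not anyone’s claimed setup.

```python
# Toy check: which positions' 'non-final layer' activations receive gradient
# from the loss term at a single position j, given a causal attention layer.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
seq_len, d_model, vocab = 6, 16, 50

h = torch.randn(seq_len, d_model, requires_grad=True)  # pre-attention activations
Wq, Wk, Wv = (torch.randn(d_model, d_model) for _ in range(3))
Wout = torch.randn(d_model, vocab)
targets = torch.randint(0, vocab, (seq_len,))

# Single-head causal self-attention over h, then per-position logits.
scores = (h @ Wq) @ (h @ Wk).T / d_model ** 0.5
mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
scores = scores.masked_fill(~mask, float("-inf"))
logits = (F.softmax(scores, dim=-1) @ (h @ Wv)) @ Wout

# Backprop from the loss term at one position j and see which positions' h
# receive any gradient at all.
j = 3
loss_j = F.cross_entropy(logits[j:j + 1], targets[j:j + 1])
loss_j.backward()
touched = (h.grad.abs().sum(dim=-1) > 0).nonzero().flatten().tolist()
print(f"loss at position {j} sends gradient into h at positions {touched}")
# With a causal mask this prints positions 0..j: each loss term reaches the
# activations at its own and all earlier positions, so h[i] collects gradient
# from every loss term at positions >= i.
```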
There is no ‘the final token’ for weights not at the final layer.
Because that is where all the gradients flow from, and why the dog wags the tail.
Aggregations of things need not be of the same kind as their constituent things? This is a lot like calling an LLM an activation optimizer. While strictly in some sense true of the pieces that make up the training regime, it’s also kind of a wild way to talk about things in the context of ascribing motivation to the resulting network.
I think maybe you’re intending ‘next token prediction’ to mean something more like ‘represents the data distribution, as opposed to some metric on the output’, but if you are, this seems like a rather unclear way of stating it.