An LLM can already read a document, and that is purely inference, a forward pass. It can be done on a TPU alone.

Training is different. It usually requires a GPU or a CPU.

One particular procedure for training neural networks is backpropagation of error.
In backpropagation: if the NN produces a correct output, the error is 0 and the weights aren't updated. There is no reward.

If the NN's outputs deviate from a target value, its state gets modified. If the weights are (sufficiently) modified, future inference will be different. Its behavior will be different.

This trains the NN away from some behaviors and toward others.
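A minimal sketch of both cases, on a toy one-weight model (my illustration, not any real LLM's training loop):

```python
# Toy model: one weight w, squared-error loss L = (w*x - t)^2 / 2.
def train_step(w, x, t, lr=0.1):
    y = w * x                # forward pass: inference is just this line
    error = y - t            # deviation from the target
    grad = error * x         # backpropagated gradient dL/dw
    return w - lr * grad     # weight update

w = 2.0
# Correct output: error is 0, so the weight is untouched. No "reward".
print(train_step(w, x=3.0, t=6.0))   # 2.0, unchanged
# Wrong output: the weight moves, so future inference behaves differently.
print(train_step(w, x=3.0, t=9.0))   # 2.9, nudged toward the target
```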
OK, torture does not necessarily point in the "right" direction. That's where the analogy breaks down. It only holds when the goal is to get a confession (see The Confession, Arthur London). Is there a word for this?
> If the NN's outputs deviate from a target value, its state gets modified. If the weights are (sufficiently) modified, future inference will be different. Its behavior will be different.
>
> This trains the NN away from some behaviors and toward others.
Why on earth would you relate this to torture though, rather than to (say) the everyday experience of looking at a thing and realizing that it’s different from what you expected? The ordinary activity of learning?
Out of all the billions of possible kinds of experience that could happen to a mind, and change that mind, you chose “torture” as an analogy for LLM training.
And I'm saying, no, it's less like torture than it is like ten thousand everyday things. Why torture?
Only negative feedback? Compare to evolution: make copies (reproduction), mutate, select the best performing, repeat. This merely allocates more resources to the most promising branches.
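As a sketch of that loop (toy fitness function, population size, and mutation scale are all my choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    return -(x - 3.0) ** 2              # toy objective, maximized at x = 3

pop = rng.normal(size=32)               # initial population
for _ in range(50):
    best = pop[np.argsort(fitness(pop))[-8:]]      # select the best performing
    pop = np.repeat(best, 4)                       # copy (reproduction)
    pop += rng.normal(scale=0.1, size=pop.size)    # mutate

print(pop.mean())   # ~3.0: resources ended up in the most promising branch
```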
Or Solomonoff-style induction: just try to find the best data-compressor among all...
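Solomonoff induction proper is uncomputable, but a crude MDL-flavored stand-in (my toy example, with zlib as the off-the-shelf compressor) shows the flavor: among candidate descriptions of the data, keep whichever compresses best.

```python
import zlib

data = bytes(range(256)) * 8   # data with a simple hidden regularity

# Two candidate "theories" of the data: leave it raw, or delta-code it.
candidates = {
    "raw":   data,
    "delta": bytes((data[i] - data[i - 1]) % 256 for i in range(1, len(data))),
}

# MDL-style score: length of the compressed description, in bytes.
scores = {name: len(zlib.compress(enc)) for name, enc in candidates.items()}
print(scores, "-> best:", min(scores, key=scores.get))  # delta wins: it exposes the pattern
```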
> the everyday experience of looking at a thing and realizing that it’s different from what you expected
This sounds like being surprised. Surprise adds emotional weight to outliers; it's more like managing the training dataset.
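A toy version of "surprise as dataset management" (the sampling scheme here is my illustration, not a claim about how LLMs are actually trained): sample training examples in proportion to how surprising they currently are.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.normal(size=100)
ys = 2.0 * xs + rng.normal(scale=0.1, size=100)
ys[::10] += 3.0                          # a handful of "surprising" outliers

w = 0.0
for _ in range(500):
    surprise = (w * xs - ys) ** 2        # per-example error = surprise
    p = surprise / surprise.sum()        # outliers get more sampling weight
    i = rng.choice(len(xs), p=p)         # managing which data gets trained on
    w -= 0.01 * 2 * (w * xs[i] - ys[i]) * xs[i]   # ordinary gradient step

print(w)   # the outliers pull w around far more than their headcount suggests
```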
I assert that it is not similar to torture; it is similar to reading.
I assert this just as strongly and with just as much evidence as you have offered for it being similar to torture.
What evidence would we collect to decide which of us is correct?