Postscript—in the example they give, the output clearly isn’t only introspection. In particular, the model says it ‘read the poem aloud several times’, which, ok, is something I’m confident the model can’t do (could it be an analogy? Maaaybe, but that seems like a stretch). My guess is that little or no actual introspection is going on, because LLMs don’t seem to be incentivized to learn to introspect accurately during training. But that’s a guess; I wouldn’t make any claims about it in the absence of empirical evidence.