Another disanalogy is that GPT-4 writes novel quines without thinking out loud in the context window. It still needs to plan them, so the planning presumably happens as the layers update the residual stream, the way it could have happened with step-by-step thinking, but using the network's inscrutable internal states instead of tokens. Thinking step by step in tokens imitates the humans in its training data, but who knows how step-by-step thinking in the residual stream works.
Thus shoggoths might be the first to wake up: models might already be training on this hypothetical alien deliberation in the residual stream, while human-imitating deliberation in generated tokens is still not being fed back into the model as training data. This hypothesis also predicts future LLMs that are trained broadly the same way as modern LLMs and still look as non-agentic and situationally unaware as modern LLMs, but start succeeding at discussing advanced mathematics, because the necessary process of studying it (inventing and solving exercises that are not already in the training set) might happen through alien deliberation within the residual stream during training, while self-supervised learning (SSL) looks at episodes that involve the related theory.