Not having heard back, I’ll go ahead and try to connect what I’m saying to your posts, just to close the mental loop:
It would be mostly reasonable to treat this agenda as being about what’s happening in the second, ‘character’ level of the three-layer model. That said, while I find the three-layer model a useful phenomenological lens, it doesn’t reflect a clean distinction in the model itself; on some level all responses involve all three layers, even if it’s helpful in practice to focus on one at a time. In particular, the base layer is ultimately made up of models of characters, in a Simulators-ish sense (with the Simplex work providing a useful theoretical grounding for that, with ‘characters’ as the distinct causal processes that generate different parts of the training data). Post-training progressively both enriches and centralizes a particular character or superposition of characters, and this agenda tries to investigate that.
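To make that framing concrete, here is a minimal toy sketch (mine, not taken from the Simplex work itself): two ‘characters’ defined as distinct token-generating processes, and an ideal predictor that does Bayesian inference over which one produced the context. The character names and the tiny iid token model are invented purely for illustration.

```python
# Toy sketch (not from the Simplex work): two "characters" as distinct
# token-generating processes, plus a predictor that does exact Bayesian
# inference over which one produced the context. The point: the ideal
# next-token predictor for a mixture of characters is a posterior-weighted
# mixture of character models, so predicting the data and modeling the
# characters end up being the same computation.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 3  # tiny vocabulary: tokens 0, 1, 2

# Each character is just a different next-token distribution (iid for simplicity).
characters = {
    "formal": np.array([0.7, 0.2, 0.1]),
    "casual": np.array([0.1, 0.2, 0.7]),
}
prior = {"formal": 0.5, "casual": 0.5}

def posterior_over_characters(context):
    """P(character | context) under the iid token model above."""
    log_post = {name: np.log(prior[name]) + np.sum(np.log(dist[context]))
                for name, dist in characters.items()}
    z = np.logaddexp.reduce(list(log_post.values()))
    return {name: np.exp(lp - z) for name, lp in log_post.items()}

def predictive_distribution(context):
    """Ideal next-token prediction = posterior-weighted mixture of characters."""
    post = posterior_over_characters(context)
    return sum(post[name] * characters[name] for name in characters)

# Sample a context from the "casual" character and watch the posterior concentrate.
context = rng.choice(VOCAB, size=10, p=characters["casual"])
print(posterior_over_characters(context))   # mass shifts toward "casual"
print(predictive_distribution(context))     # prediction approaches the casual distribution
```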
The three-layer model doesn’t seem to have much to say (at least in the post) about, at the second level, what distinguishes a context-prompted ephemeral persona from that richer and more persistent character that the model consistently returns to (which is informed by but not identical to the assistant persona), whereas that’s exactly where this agenda is focused. The difference is at least partly quantitative, but it’s the sort of quantitative difference that adds up to a qualitative difference; eg I expect Claude has far more circuitry dedicated to its self-model than to its model of Gandalf. And there may be entirely qualitative differences as well.
With respect to active inference, even if we assume that active inference is a complete account of human behavior, there are still a lot of things we’d want to say about human behavior that wouldn’t be very usefully expressed in active inference terms, for the same reasons that biology students don’t just learn physics and call it a day. As a relatively dramatic example, consider the stories that people tell themselves about who they are—even if that cashes out ultimately into active inference, it makes way more sense to describe it at a different level. I think that the same is true for understanding LLMs, at least until and unless we achieve a complete mechanistic-level understanding of LLMs, and probably afterward as well.
And finally, the three-layer model is, as it says, a phenomenological account, whereas this agenda is at least partly interested in what’s going on in the model’s internals that drives that phenomenology.
“The base layer is ultimately made up of models of characters, in a Simulators-ish sense” No, it is not, in much the same way that what your brain is running is not ultimately made of characters. It’s ultimately made of approximate Bayesian models.
“what distinguishes a context-prompted ephemeral persona from that richer and more persistent character” Check “Why Simulator AIs want to be Active Inference AIs”.
“With respect to active inference …” Sorry, I don’t want to be offensive, but it would actually be helpful for your project to understand active inference at least a bit. Empirically, the has-repeatedly-read-Scott-Alexander’s-posts-on-it level of exposure seems to lead people into a weird epistemic state, in which they have a sense of understanding but are unable to answer even basic questions, make very easy predictions, and so on. I suspect what’s going on is a bit like someone reading a well-written popular-science book about quantum mechanics while lacking concepts like complex numbers or vector spaces: they may come away with a somewhat superficial sense of understanding.
Obviously active inference has a lot to say about how people model themselves. For example, when typing these words, I assume it’s me who types them (and not someone else). Why? That’s actually an important question for why there is a self at all. Why not, or to what extent not, in LLMs? How the stories people tell themselves about who they are affect what they do is totally something that makes sense to understand from an active inference perspective.
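A crude sketch of that self-attribution question (my own toy framing, not how the active inference literature formalizes it): treat “it’s me who is typing” versus “it’s someone else” as two generative models of the observed keystrokes and compare them. The noise rate and the uniform “someone else” model are arbitrary choices for illustration.

```python
# Toy sketch: self-attribution of actions as Bayesian model comparison.
# Hypothesis "me": observed keystrokes should match my intended ones, up to rare slips.
# Hypothesis "someone else": observed keystrokes are unrelated to my intentions.
import numpy as np

ALPHABET = 26

def posterior_i_am_typing(intended, observed, prior_me=0.5, noise=0.05):
    """P(I am the author | intended, observed) under the crude generative models above."""
    intended = np.asarray(intended)
    observed = np.asarray(observed)
    # Under "me": each observed key equals the intended key except for rare slips.
    p_obs_given_me = np.where(observed == intended, 1 - noise, noise / (ALPHABET - 1))
    # Under "someone else": each observed key is uniform over the alphabet.
    p_obs_given_other = np.full(len(observed), 1 / ALPHABET)
    log_me = np.log(prior_me) + np.log(p_obs_given_me).sum()
    log_other = np.log(1 - prior_me) + np.log(p_obs_given_other).sum()
    return np.exp(log_me - np.logaddexp(log_me, log_other))

intended = [7, 4, 11, 11, 14]                                # the keys I meant to press
print(posterior_i_am_typing(intended, [7, 4, 11, 11, 14]))   # close to 1: evidence I am the author
print(posterior_i_am_typing(intended, [3, 22, 0, 9, 16]))    # close to 0: someone else's keystrokes
```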
Fair enough — is there a source you’d most recommend for learning more?