“A teacher who used the voice of authority exactly when appropriate, rather than inflexibly applying it in every case, could have zero entropy and still be very adaptive/flexible.” I’m not sure I would call this teacher adaptable. I might call them adapted in the sense that they’re functioning well in their current environment, but if the environment changed in some way (so that actions in the current state no longer led to the same range of consequences in later states), they would fail to adapt. (Horney would call this person neurotic but successful.)
It’s not so much about the shallowness or short-sightedness, as I understand it (though the teacher and people-pleasing-friend examples were very simple policies). A child might, for example, develop an incredibly elaborate policy over the course of childhood to cope with an eruptive parent (be nice when mom is sober, be in your bedroom when she isn’t, unless she calls you from the other room, in which case you better show up quick; make sure there’s beer in the house, but not too much). Yet they might still fail to update that elaborate (and well-adapted) policy when they encounter women who remind them of their mother later in life, and this causes them to be misaligned with the new women in their lives, which causes suffering for all involved.
Or a successful executive might have developed incredibly elaborate policies for project management and interpersonal conflict that served them well in their corporate environment and led to many promotions...and then discover when they retire that there is some very low-entropy state in their policy that serves them very poorly when “managing projects” with their family in retirement (“Grandma retired and she treats everyone like her employee!”). And this causes misalignment with their family system, which causes suffering.
Does this elaboration of the metaphor improve the mapping between the therapeutic situation and the policy entropy collapse dynamic in the AI papers?
(If I understand right, you can even tie these two therapy examples directly to the equation from the Cui et al. paper, R = -a * e^H + b. In both examples, the client has made an exploitation/exploration trade-off that optimized performance. The successful executive was able to outcompete her colleagues in the workplace, but it came at the cost of selecting H = 0, i.e., the performance ceiling R = -a + b. This mirrors the casual observation that the siblings who adapted best to their troubled households growing up end up being the least able to adapt quickly to adulthood; that students who make the highest grades in school have more trouble adapting to the workplace or to the dissertation stage of PhD programs; or that professionals who find the most success at work have more trouble adjusting to retirement...though these are of course very broad, hand-wavy observations with innumerable exceptions.)
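To make that concrete, here is a minimal numeric sketch of that trade-off curve (the coefficients a and b are hypothetical; Cui et al. fit them empirically per model and dataset):

```python
import numpy as np

a, b = 0.5, 1.0                  # hypothetical fit coefficients
H = np.linspace(0.0, 2.0, 5)     # policy entropy
R = -a * np.exp(H) + b           # the fitted trade-off form from Cui et al.

for h, r in zip(H, R):
    print(f"H = {h:.2f} -> R = {r:.3f}")
# H = 0 hits the performance ceiling R = -a + b; every bit of retained
# entropy (exploration) is paid for at an exponential rate in reward.
```

The executive, in this framing, cashed in all of her entropy for reward.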
First I want to make sure I understand the question, as there are a lot of moving pieces. I think you are asking (1) why higher policy entropy (the type of entropy discussed in Cui et al.) increases adaptability in the teacher example, (2) why the example teacher cannot (or does not) pursue an optimal Bayesian exploration strategy, and (3) from whose perspective entropy is measured in the example. If I’ve misunderstood, please ignore what follows.
Model the teacher as having a strategy S that’s always correct in her original environment; occasionally (say 1/50 times) she accidentally uses strategy S’, which is always wrong and gets punished. Over time, this punishment drives the probability of using S’ down to nearly zero, say 1/1000 or less.
Then the environment changes. Now S works only half the time (+1 when right, -1 when wrong, for an average reward of 0) and S’ works every time (if only she would use it!). The problem is that she’s still using S 999 out of every 1000 times and collecting that average reward of 0. Meanwhile S’ is sampled with only that tiny 1/1000 probability, and when it is, the gradient update is proportional to both the probability (0.001) and the advantage (≈1), so P(S’) increases by only about 0.001. Since she samples S’ just once per thousand actions, she’d need many thousands of actions to recognize S’ as superior.
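A toy REINFORCE simulation makes the timescale vivid. Everything here (learning rate, rewards, random seed) is made up; it’s a sketch of the dynamic, not the Cui et al. setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Softmax policy over two strategies: S (index 0) and S' (index 1).
# Start with P(S') ~ 1/1000, the teacher's post-punishment policy.
logits = np.array([np.log(999.0), 0.0])
lr = 1.0

def reward(action: int) -> float:
    # Changed environment: S' always pays +1; S pays +1 or -1 at random.
    return 1.0 if action == 1 else float(rng.choice([1.0, -1.0]))

for step in range(1, 200_001):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    if p[1] > 0.5:
        print(f"P(S') first exceeded 0.5 after {step} steps")
        break
    a = rng.choice(2, p=p)
    r = reward(a)                          # baseline of 0, so advantage ~ r
    logits += lr * r * (np.eye(2)[a] - p)  # REINFORCE log-prob gradient
```

On runs like this the flip takes on the order of a few thousand steps, and nearly all of them are spent just waiting to sample S’ at all.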
The problem is that the exploration that could improve her life has been trained out of her policy/behavior pattern. The past environment punished deviations so effectively that when the world changes, she lacks the behavioral variance to discover the new optimal strategy. (This maps onto the therapy examples: the child who learned never to speak up in an abusive home has near-zero probability of assertive communication, even when they’re finally in a safe environment where assertion would be rewarded).
Why doesn’t she update like a perfect Bayesian agent? If she did, the failures of S would surprise her: she’d compute the likelihood that the environment had changed (a toy version of this calculation is sketched below), recognize that the optimal strategy might have changed as well, and then weigh the information-gathering value of trying new strategies before choosing her next action. In the LLM case, this doesn’t happen because it’s not how LLMs are trained (at least not in Cui et al....I’m in no position to say what’s happening with frontier LLM training in real life).

As for whether this hurts the metaphor (since humans are not purely learning from policy gradients like the Cui et al. LLMs), I don’t think so. Humans are better Bayesians than the LLMs, but still not very good (dopamine-mediated temporal difference learning in the basal ganglia is basically model-free reinforcement learning, afaik, plus habits, base-rate neglect, confirmation bias, limited cognitive capacity to recognize environmental change, ego protection, etc.). And the situations where we’re least successful as Bayesians are just those situations that often drive us into therapy (assuming the situation matters). You could probably even frame a decent chunk of therapy interventions (especially REBT, CBT, and solutions-oriented therapies) as attempts to move people toward more Bayesian updating.
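Coming back to the Bayesian teacher: here’s how fast likelihood updating flips her beliefs. The success probabilities and the prior are made up, and a real change-point model would be more elaborate; this is just the flavor:

```python
# H_old: old environment, S succeeds with p = 0.98 (the 1/50 slip rate)
# H_new: new environment, S succeeds with p = 0.5
p_success = {"old": 0.98, "new": 0.5}
posterior = 0.01                     # prior that the environment changed

for outcome in [0, 0, 1, 0]:         # observed S results: fail, fail, win, fail
    like_new = p_success["new"] if outcome else 1 - p_success["new"]
    like_old = p_success["old"] if outcome else 1 - p_success["old"]
    posterior = (like_new * posterior) / (
        like_new * posterior + like_old * (1 - posterior)
    )
    print(f"outcome={outcome}  P(changed) = {posterior:.3f}")
# -> P(changed) climbs to ~0.99 after just four observations
```

Four observations instead of thousands of gradient steps; and once P(changed) is high, deliberately trying S’ has obvious information value.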
And the last piece, entropy being subjective, would be just the point of therapy and of some of the interventions described in the other recent RLHF+ papers. From the LLM’s point of view (pardon my anthropomorphism), policy entropy is zero (or near zero). But the researcher can see that there are alternative actions, and hence makes design choices to increase the probability that those actions will be tried in future training cycles. Likewise, one benefit of therapy is the broader perspective on humanity (especially on aspects of humanity tied to shame or cultural taboos, which aren’t often talked about in daily life) that we as individuals don’t always see, since we don’t get much privileged access to a large variety of other people’s inner lives.
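To put numbers on “near zero from the inside, recoverable from the outside” (the uniform-mixing tweak below is a generic, assumed example of an entropy-raising design choice, not a specific method from those papers):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                  # by convention, 0 * log 0 = 0
    return -np.sum(p * np.log(p))

# The collapsed policy the agent actually samples from:
agent_view = np.array([0.999, 0.001])
print(entropy(agent_view))        # ~0.008 nats: effectively deterministic

# The researcher, who can see the full action space, mixes a little
# uniform exploration back in by design:
eps = 0.1
nudged = (1 - eps) * agent_view + eps / len(agent_view)
print(entropy(nudged))            # ~0.2 nats: alternatives are live again
```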