The singular vectors of the Jacobian between two layers seem more similar to what you’re doing in the OP than the Hessian of the objective function does? Because the Hessian of the objective function forces everything to be mediated by the final probabilities, which means it discounts directions in activation space that don’t change the probabilities yet, but would change them if the change in activations were scaled up beyond the infinitesimal.
Edit: wait, maybe I misunderstood. I assumed that by the objective function you meant some cross-entropy on the token predictions, but in context it’s more likely you meant the objective function for the magnitude of change in later-layer activations induced by a given activation vector?
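To make the first reading concrete, here’s a minimal sketch (in JAX, with a made-up two-layer map standing in for the slice of network between layer A and layer B) of taking the SVD of the layer-to-layer Jacobian and checking that the top right singular vector really is the activation direction with the largest downstream effect:

```python
import jax
import jax.numpy as jnp

# Toy stand-in for the network between layer A and layer B
# (weights are random; a real use would take the actual layers).
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
W1 = jax.random.normal(k1, (8, 16))
W2 = jax.random.normal(k2, (16, 8))

def between_layers(a):
    return jnp.tanh(W2 @ jnp.tanh(W1 @ a))

a0 = jax.random.normal(jax.random.PRNGKey(1), (16,))

# Jacobian of later-layer activations w.r.t. earlier-layer activations
J = jax.jacobian(between_layers)(a0)

# Right singular vectors (rows of Vt) are input directions, ordered by
# how much they move the later layer (the singular values).
U, S, Vt = jnp.linalg.svd(J)

eps = 1e-3
effect_top = jnp.linalg.norm(
    between_layers(a0 + eps * Vt[0]) - between_layers(a0))
effect_bottom = jnp.linalg.norm(
    between_layers(a0 + eps * Vt[-1]) - between_layers(a0))
# effect_top should be roughly eps * S[0], effect_bottom roughly eps * S[-1]
```

Note that nothing here routes through final probabilities or a loss, which is exactly the difference from the Hessian-of-the-objective picture.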
I wonder if a similar technique could form the foundation for a fully general solution to the alignment problem. Mathematically speaking, all this technique needs is a vector-to-vector function, and it’s not just layer-to-layer relationships that can be understood as vector-valued functions; the world as a function of the policy is also vector-valued.
That is, rather than running a search to maximize some utility function, a model-based agent could run a search for small changes in policy that have a large impact on the world. If one can then taxonomize, constrain, and select between these impacts, one might be able to get a highly controllable AI.
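The same machinery transfers directly, at least as a sketch. Assuming a differentiable world model (the toy `world` map below is a made-up stand-in for one), the high-impact small policy changes are again right singular vectors of a Jacobian, and the corresponding left singular vectors are the induced world-changes one would then try to taxonomize:

```python
import jax
import jax.numpy as jnp

# Hypothetical differentiable world model: policy parameters -> vector
# of world-state features. A real agent would need a learned model;
# this is just a smooth random map for illustration.
A = jax.random.normal(jax.random.PRNGKey(0), (12, 6))

def world(policy):
    return jnp.sin(A @ policy)

policy0 = jnp.zeros(6)

J = jax.jacobian(world)(policy0)
U, S, Vt = jnp.linalg.svd(J)

# Rows of Vt: small policy perturbations, ordered by impact (S).
# Columns of U: the directions of change in world-state they induce,
# i.e. the "impacts" one would taxonomize, constrain, and select among.
high_impact_policy_change = Vt[0]
induced_world_change = U[:, 0]
# To first order, J @ Vt[0] == S[0] * U[:, 0].
```

The selection step is then a choice over the columns of `U` rather than over raw utility, which is where the controllability would have to come from.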
Obviously there are some difficulties here, because the activations are easier to search over since we have an exact way to calculate them. But that’s a capabilities question rather than an alignment question.