Nietzsche also had mixed views on Socrates, for similar reasons. He talks about this in many of his books, including “The Birth of Tragedy” and “The Gay Science”.
Daniel Murfet
The set of motivated, intelligent people with the relevant skills to do technical alignment work in general, and mechanistic interpretability in particular, has a lot of overlap with the set of people who can do capabilities work. That includes many academics, and students in masters and PhD programs. One way or another they’re going to publish; would you rather it be alignment/interpretability work or capabilities work?
It seems to me that speeding up alignment work by several orders of magnitude is unlikely to happen without co-opting a significant number of existing academics, labs and students in related fields (including mathematics and physics in addition to computer science). This is happening already, within ML groups but also physics (Max Tegmark’s students) and mathematics (e.g. some of my students at the University of Melbourne).
I have colleagues in my department publishing stacks of papers in CVPR, NeurIPS etc., which this community might call capabilities work. If I succeeded in convincing them to do some alignment or mechanistic interpretability work, they would do it because it was intrinsically interesting or likely to be high status. They would gravitate towards the kinds of work that are dual-use. Relative to the status quo that seems like progress to me, but I’m genuinely interested in the opinion of people here. Real success in this recruitment would, among other things, dilute the power of LW norms to influence things like publishing.
On balance it seems to me beneficial to aggressively recruit academics and their students into alignment and interpretability.
I think this is a very nice way to present the key ideas. However, in practice I think the discretisation is actually harder to reason about than the continuous version. There are deeper problems, but I’d start by wondering how you would ever compute c(f) defined this way, since it seems to depend in an intricate way on the details of e.g. the floating point implementation.
I’ll note that the volume codimension definition of the RLCT is essentially what you have written down here, and you don’t need any mathematics beyond calculus to write that down. You only need things like resolutions of singularities if you actually want to compute that value, and the discretisation doesn’t seem to offer any advantage there.
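To be concrete about what I mean by the volume codimension form (notation mine, following the standard SLT setup, with $K(w)$ the population loss minus its minimum and $\varphi$ a prior density):

```latex
V(\epsilon) = \int_{K(w) \le \epsilon} \varphi(w)\, dw,
\qquad
\lambda = \lim_{\epsilon \to 0} \frac{\log V(a\epsilon) - \log V(\epsilon)}{\log a}
\quad \text{for any fixed } a \in (0,1).
```

Stating this requires only calculus; it’s evaluating the limit for a concrete $K$ that brings in resolution of singularities.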
Thanks for the article. For what it’s worth, here’s the defence I give of Agent Foundations and associated research, when I am asked about it (for background, I’m a mathematician, now working on mathematical aspects of AI safety different from Agent Foundations). I’d be interested if you disagree with this framing.
We can imagine the alignment problem coming in waves. Success in each wave merely buys you the chance to solve the next. The first wave is the problem we see in front of us right now, of getting LLMs to Not Say Naughty Things, and we can glimpse a couple of waves after that. We don’t know how many waves there are, but it is reasonable to expect that beyond the early waves our intuitions probably aren’t worth much.
That’s not a surprise! As physics probed smaller scales, at some point our intuitions stopped being worth anything, and we switched to relying heavily on abstract mathematics (which became a source of different, more hard-won intuitions). Similarly, we can expect that as we scale up our learning machines, we will enter a regime where current intuitions fail to be useful. At the same time, the systems may be approaching more optimal agents, and theories like Agent Foundations start to provide a very useful framework for reasoning about the nature of the alignment problem.
So in short I think of Agent Foundations as like quantum mechanics: a bit strange perhaps, but when push comes to shove, one of the few sources of intuition we have about waves 4, 5, 6, … of the alignment problem. It would be foolish to bet everything on solving waves 1, 2, 3 and then be empty handed when wave 4 arrives.
If the cost is a problem for you, send a postal address to daniel.murfet@gmail.com and I’ll mail you my physical copy.
That intuition sounds reasonable to me, but I don’t have strong opinions about it.
One thing to note is that training and test performance are lagging indicators of phase transitions. In our limited experience so far, measures such as the RLCT do seem to indicate that a transition is underway earlier (e.g. in Toy Models of Superposition), but in the scenario you describe I don’t know if it’s early enough to detect structure formation “when it starts”.
For what it’s worth my guess is that the information you need to understand the structure is present at the transition itself, and you don’t need to “rewind” SGD to examine the structure forming one step at a time.
> Induction heads? Ok, we are maybe on track to retro engineer the mechanism of regex in LLMs. Cool.
This dramatically undersells the potential impact of Olsson et al. You can’t dismiss modus ponens as “just regex”. That’s the heart of logic!
For many, the argument for AI safety being an urgent concern involves a belief that current systems are, in some rough sense, reasoning, and that this capability will increase with scale, leading to beyond-human-level intelligence within a timespan of decades. Many smart outsiders remain sceptical, because they are not convinced that anything like reasoning is taking place.
I view Olsson et al as nontrivial evidence for the emergence of internal computations resembling reasoning, with increasing scale. That’s profound. If that case is made stronger over time by interpretability (as I expect it to be) the scientific, philosophical and societal impact will be immense.
> 4. Goals misgeneralize out of distribution.
> See: Goal misgeneralization: why correct specifications aren’t enough for correct goals, Goal misgeneralization in deep reinforcement learning
> OAA Solution: (4.1) Use formal methods with verifiable proof certificates[2]. Misgeneralization can occur whenever a property (such as goal alignment) has been tested only on a subset of the state space. Out-of-distribution failures of a property can only be ruled out by an argument for a universally quantified statement about that property—but such arguments can in fact be made! See VNN-COMP. In practice, it will not be possible to have enough information about the world to “prove” that a catastrophe will not be caused by an unfortunate coincidence, but instead we can obtain guaranteed probabilistic bounds via stochastic model checking.
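For what it’s worth, my understanding of the “guaranteed probabilistic bounds” claim is the standard statistical model checking shape: Monte Carlo rollouts plus a concentration inequality. A toy sketch (the dynamics, parameters, and the use of a Hoeffding bound are my stand-ins, not OAA’s actual machinery):

```python
import math
import random

def rollout_unsafe(p_fail_step=0.01, horizon=20, rng=random):
    """One rollout of a toy Markov chain; True if it ever enters the unsafe state."""
    return any(rng.random() < p_fail_step for _ in range(horizon))

def smc_upper_bound(n_samples=10_000, delta=1e-3, seed=0):
    """Monte Carlo estimate of P(unsafe within horizon), plus a Hoeffding term:
    with probability >= 1 - delta, the true probability lies below the bound."""
    rng = random.Random(seed)
    hits = sum(rollout_unsafe(rng=rng) for _ in range(n_samples))
    p_hat = hits / n_samples
    return p_hat + math.sqrt(math.log(1 / delta) / (2 * n_samples))
```

Here the true probability is $1 - 0.99^{20} \approx 0.18$ and the returned bound exceeds it with high confidence; the point is that the guarantee is probabilistic, not a proof quantified over all states.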
Based on the Bold Plan post and this one, my main point of concern is that I don’t believe in the feasibility of the model checking, even in principle. The state space S and action space A of the world model will be too large for techniques along the lines of COOL-MC, which (if I understand correctly) have to first assemble a discrete-time Markov chain by querying the NN and then try to apply formal verification methods to it. I imagine that you are actually thinking of learned coarse-grainings of both S and A, to which one applies something like formal verification.
Assuming that’s correct, then there’s an inevitable lack of precision on the inputs to the formal verification step. You have to either run the COOL-MC-like process until you hit your time and compute budget and then accept that you’re missing state-action pairs, or you coarse-grain to some degree within your budget and accept a dependence on the quality of your coarse-graining. If you’re doing an end-run around this tradeoff somehow, could you direct me to where I can read more about the solution?
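To spell out what I mean by a COOL-MC-like process (every piece here is an invented stand-in: a toy policy function in place of the NN, toy dynamics, and five states):

```python
# Query the policy on every discrete state, assemble the induced discrete-time
# Markov chain, then compute the probability of reaching a "bad" state.
GOAL, BAD = 3, 4
STATES = range(5)

def policy(state):
    # stand-in for querying the neural network; returns action 0 or 1
    return state % 2

def transition(state, action):
    # toy dynamics: maps next_state -> probability
    if state in (GOAL, BAD):
        return {state: 1.0}                      # absorbing states
    risk = 0.3 if action == 1 else 0.1
    return {BAD: risk, state + 1: 1.0 - risk}

# assemble the Markov chain induced by the policy
chain = {s: transition(s, policy(s)) for s in STATES}

# P(reach BAD) by fixed-point iteration: p(s) = sum_t chain[s][t] * p(t)
p = {s: 1.0 if s == BAD else 0.0 for s in STATES}
for _ in range(50):
    p = {s: sum(prob * p[t] for t, prob in chain[s].items()) for s in STATES}
```

The enumeration over STATES is exactly the step that blows up when S and A are large, which is why I assume some learned coarse-graining has to come first.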
I know there’s literature on learned coarse-grainings of S and A in the deep RL setting, but I haven’t seen it combined with formal verification. Is there a literature? It seems important.
I’m guessing that this passage in the Bold Plan post contains your answer:
> Defining a sufficiently expressive formal meta-ontology for world-models with multiple scientific explanations at different levels of abstraction (and spatial and temporal granularity) having overlapping domains of validity, with all combinations of {Discrete, Continuous} and {time, state, space}, and using an infra-bayesian notion of epistemic state (specifically, convex compact down-closed subsets of subprobability space) in place of a Bayesian state

In which case I see where you’re going, but this seems like the hard part?
Thanks, that makes a lot of sense to me. I have some technical questions about the post with Owen Lynch, but I’ll follow up elsewhere.
Not easily detected. As in, there might be a sudden (in SGD steps) change in the internal structure of the network over training that is not easily visible in the loss or other metrics that you would normally track. If you think of the loss as an average over performance on many thousands of subtasks, a change in internal structure (e.g. a circuit appearing in a phase transition) relevant to one task may not change the loss much.
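A toy numerical version of that point (all numbers invented): with a thousand subtasks, a sharp transition on one of them moves the aggregate loss by only a thousandth of its size.

```python
N_SUBTASKS = 1000

def subtask_loss(task, step):
    # subtask 0 undergoes a sharp internal transition at step 50;
    # every other subtask's loss is unchanged throughout training
    if task == 0:
        return 1.0 if step < 50 else 0.0
    return 0.5

def mean_loss(step):
    return sum(subtask_loss(t, step) for t in range(N_SUBTASKS)) / N_SUBTASKS

# the transition drops subtask 0's loss by 1.0,
# but the aggregate loss only moves by 0.001
drop = mean_loss(49) - mean_loss(51)
```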
> To see this, we use a slight refinement of the dynamical estimator, where we restrict sampling to lie within the normal hyperplane of the gradient vector at initialization, which seems to make this behavior more robust.
Could you explain the intuition behind using the gradient vector at initialization? Is this based on some understanding of the global training dynamics of this particular network on this dataset?
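To make sure I’m reading the construction right, mechanically: I take it each proposed sampler step is projected onto the hyperplane orthogonal to the gradient measured at initialization, i.e. something like this sketch (my assumption, with invented vectors, not your actual code):

```python
def project_to_normal_hyperplane(v, g):
    """Remove from v its component along g, leaving the part of the step
    lying in the hyperplane normal to g (both given as lists of floats)."""
    vg = sum(a * b for a, b in zip(v, g))
    gg = sum(b * b for b in g)
    return [a - (vg / gg) * b for a, b in zip(v, g)]

# invented 2-d example: g points along the second axis,
# so the projection simply zeroes the second component of the step
step = project_to_normal_hyperplane([1.0, 1.0], [0.0, 2.0])
```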
Oh that makes a lot of sense, yes.
Great question, thanks. tl;dr: it depends what you mean by established; probably the obstacle to establishing such a thing is lower than you think.
To clarify the two types of phase transitions involved here, in the terminology of Chen et al:
Bayesian phase transition in number of samples: as discussed in the post you link to in Liam’s sequence, where the concentration of the Bayesian posterior shifts suddenly from one region of parameter space to another, as the number of samples $n$ increases past some critical sample size $n_c$. There are also Bayesian phase transitions with respect to hyperparameters (such as variations in the true distribution), but those are not what we’re talking about here.
Dynamical phase transitions: the “backwards S-shaped loss curve”. I don’t believe there is an agreed-upon formal definition of this kind of phase transition in the deep learning literature, but what we mean by it is that the SGD trajectory is for some time strongly influenced by (e.g. stays in the neighbourhood of) a critical point $w^*_1$ and then strongly influenced by another critical point $w^*_2$. In the clearest case there are two plateaus, the one with higher loss corresponding to the label $w^*_1$ and the one with the lower loss corresponding to $w^*_2$. In larger systems there may not be a clear plateau (e.g. in the case of induction heads that you mention) but it may still be reasonable to think of the trajectory as dominated by the critical points.
The former kind of phase transition is a first-order phase transition in the sense of statistical physics, once you relate the posterior to a Boltzmann distribution. The latter is a notion that belongs more to the theory of dynamical systems or potentially catastrophe theory. The link between these two notions is, as you say, not obvious.
However Singular Learning Theory (SLT) does provide a link, which we explore in Chen et al. SLT says that the phases of Bayesian learning are also dominated by critical points of the loss, and so you can ask whether a given dynamical phase transition has “standing behind it” a Bayesian phase transition, where at some critical sample size $n_c$ the posterior shifts from being concentrated near $w^*_1$ to being concentrated near $w^*_2$.
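Concretely (my notation and labeling: $w^*_1$ the earlier, higher-loss critical point, $w^*_2$ the later, lower-loss one), the asymptotic free energy of a phase is

```latex
F_n(w^*_i) \;\approx\; n\, L(w^*_i) + \lambda_i \log n, \qquad i = 1, 2,
```

so if $L(w^*_2) < L(w^*_1)$ and $\lambda_2 > \lambda_1$, the posterior prefers $w^*_1$ for small $n$ and $w^*_2$ for large $n$, with the crossover $n_c$ solving $n_c \big(L(w^*_1) - L(w^*_2)\big) = (\lambda_2 - \lambda_1)\log n_c$.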
It turns out that, at least for sufficiently large $n$, the only real obstruction to this Bayesian phase transition existing is that the local learning coefficient near $w^*_2$ should be higher than near $w^*_1$. This will be hard to prove theoretically in non-toy systems, but we can estimate the local learning coefficients, compare them, and thereby provide evidence that a Bayesian phase transition exists.
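To give a sense of what such an estimate involves (a toy sketch, not our actual pipeline: one parameter, population loss $L(w) = w^2$ whose learning coefficient is $1/2$, and the estimator $\hat\lambda = n\beta\,(\mathbb{E}_\beta[L_n] - L_n(w^*))$ with the expectation taken over SGLD samples from the tempered posterior):

```python
import math
import random

def sgld_lambda_hat(n=1000, beta=1.0, eps=1e-4, steps=50_000, seed=0):
    """Estimate the local learning coefficient of L(w) = w^2 at w* = 0 by
    sampling the tempered posterior exp(-n * beta * L(w)) with SGLD."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(steps):
        grad = 2.0 * w                                  # dL/dw for L(w) = w^2
        w += -0.5 * eps * n * beta * grad + math.sqrt(eps) * rng.gauss(0.0, 1.0)
        total += w * w                                  # running sum of L(w)
    return n * beta * (total / steps - 0.0)             # L(w*) = 0
```

With these settings the estimate lands near the true value 1/2, up to discretization bias; for a real network one localizes the chain around the trained $w^*$ and averages minibatch losses instead.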
This has been done in the Toy Model of Superposition in Chen et al, and we’re in the process of looking at a range of larger systems including induction heads. We’re not ready to share those results yet, but I would point you to Nina Rimsky and Dmitry Vaintrob’s nice post on modular addition which I would say provides evidence for a Bayesian phase transition in that setting.
There are some caveats and details that I can go into if you’re interested. I would say the existence of Bayesian phase transitions in non-toy neural networks is not established yet, but at this point I think we can be reasonably confident they exist.
I think it is too early to know how many phase transitions there are in e.g. the training of a large language model. If there are many, it seems likely to me that they fall along a spectrum of “scale” and that it will be easier to find the more significant ones than the less significant ones (e.g. we discover transitions like the onset of in-context learning first, because they dramatically change how the whole network computes).
As evidence for that view, I would put forward the fact that putting features into superposition is known to be a phase transition in toy models (based on the original post by Elhage et al and also our work in Chen et al) and therefore seems likely to be a phase transition in larger models as well. That gives an example of phase transitions at the “small” end of the scale. At the “big” end of the scale, the evidence in Olsson et al that induction heads and in-context learning appear in a phase transition seems convincing to me.
On general principles, understanding “small” phase transitions (where the scale is judged relative to the overall size of the system, e.g. number of parameters) is like probing a physical system at small length scales / high energy, and will require more sophisticated tools. So I expect that we’ll start by gaining a good understanding of “big” phase transitions and then as the experimental methodology and theory improves, move down the spectrum towards smaller transitions.
On these grounds I don’t expect us to be swamped by the smaller transitions, because they’re just hard to see in the first place; the major open problem in my mind is how far we can get down the scale with reasonable amounts of compute. Maybe one way that SLT & developmental interpretability fails to be useful for alignment is if there is a large “gap” in the spectrum, where beyond the “big” phase transitions that are easy to see (and for which you may not need fancy new ideas) there is just a desert / lack of transitions, and all the transitions that matter for alignment are “small” enough that a lot of compute and/or very sophisticated ideas are necessary to study them. We’ll see!
Yes I think that’s right. I haven’t closely read the post you link to (but it’s interesting and I’m glad to have it brought to my attention, thanks) but it seems related to the kind of dynamical transitions we talk briefly about in the Related Works section of Chen et al.
That’s what we’re thinking, yeah.
Thanks Akash. Speaking for myself, I have plenty of experience supervising MSc and PhD students and running an academic research group, but scientific institution building is a next-level problem. I have spent time reading and thinking about it, but it would be great to be connected to people with first-hand experience or who have thought more deeply about it, e.g.
People with advice on running distributed scientific research groups
People who have thought about scientific institution building in general (e.g. those with experience starting FROs in biosciences or elsewhere)
People with experience balancing fundamental research with product development within an institution
I am seeking advice from people within my institution (University of Melbourne) but Timaeus is not a purely academic org and their experience does not cover all the hard parts.
because a physicist made these notes
Grumble :)
nonsingular
singular
By the zero-shot hyperparameter work do you mean https://arxiv.org/abs/2203.03466 “Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer”? I’ve been sceptical of NTK-based theory, seems I should update.