AI notkilleveryoneism researcher at Apollo, focused on interpretability.
Thinking the example through a bit further: in a ReLU layer, features are all confined to the positive orthant. So superposed features computed in a ReLU layer all have positive inner products with each other. So if I send the output of one ReLU layer implementing n² AND gates in superposition directly to another ReLU layer implementing another n² ANDs on a subset of the outputs of that previous layer, the assumption that input directions are equally likely to have positive and negative inner products is not satisfied.
Maybe you can fix this with bias offsets somehow? Not sure at the moment. But as currently written, it doesn't seem like I can use the outputs of one layer performing a subset of ANDs as the inputs of another layer performing another subset of ANDs.

EDIT: Talked it through with Jake. A bias offset can help, but it currently looks to us like you still end up with AND gates that share a variable systematically having a positive sign in their inner product. Which might make it difficult to implement a valid general recipe for multi-step computation if you try to work out the details.
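A quick numerical check of the claim that ReLU outputs break the sign-symmetry assumption (a toy sketch; the layer width and feature count are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512          # layer width (made-up)
n_feats = 200    # number of superposed feature directions (made-up)

# Feature directions as they might look *before* a ReLU layer:
# random signed vectors, whose pairwise inner products are mean zero.
signed = rng.standard_normal((n_feats, d)) / np.sqrt(d)

# Feature directions as they look *after* a ReLU layer:
# confined to the positive orthant.
positive = np.abs(signed)

def mean_offdiag_inner(v):
    # average pairwise inner product, excluding each vector with itself
    g = v @ v.T
    return (g.sum() - np.trace(g)) / (len(v) * (len(v) - 1))

print(mean_offdiag_inner(signed))    # ~0: positive and negative overlaps balance out
print(mean_offdiag_inner(positive))  # systematically positive (~2/pi for this setup)
```

Random signed directions have balanced inner products, while directions confined to the positive orthant all overlap positively, so interference between them no longer cancels on average.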
A very central use case for a superposed boolean general computer. Otherwise you don’t actually get to implement any serial computation.
Noting out loud that I’m starting to feel a bit worried about the culture-war-like tribal conflict dynamic between AIS/LW/EA and e/acc circles that I feel is slowly beginning to set in on our end as well, centered on Twitter but also present to an extent on other sites and in real life. The potential sanity damage to our own community and possibly future AI policy from this should it intensify is what concerns me most here.
People have tried to suck the rationalist diaspora into culture-war-like debates before, and I think the diaspora has done a reasonable enough job of surviving intact by not taking the bait much. But on this topic, many of us actually really care about both the content of the debate itself and what people outside the community think of it, and I fear it is making us more vulnerable to the algorithms’ attempts to infect us than we have been in the past.
I think us going out of our way to keep standards high in memetic public spaces might help somewhat in keeping our own sanity from deteriorating. If we engage on Twitter, maybe we shouldn't just refrain from lowering the level of debate and using arguments as soldiers, but have a policy of actively commenting to correct the record when people of any affiliation make locally invalid arguments against our opposition, provided we would counterfactually also correct the record were such a locally invalid argument directed against us or our in-group. I think the behaviour of high-status and highly visible (Twitter/YouTube) community members might end up having a particularly large impact on the eventual outcome here.
Having digested this a bit more, I’ve got a question regarding the noise terms, particularly for section 1.3 that deals with constructing general programs over sparse superposed variables.
Unfortunately, since the feature vectors f₁, …, fₘ are random vectors, their inner product will have a typical size of 1/√d₀. So, on an input which has no features connected to neuron i, the preactivation for that neuron will not be zero: it will be a sum of these interference terms, one for each feature that is connected to the neuron. Since the interference terms are uncorrelated and mean zero, they start to cause neurons to fire incorrectly when Θ(d₀) features are connected to each neuron. Since each feature is connected to each neuron with probability p = log²(d₀)/√d, this means neurons start to misfire when m = Θ̃(d₀√d).
It seems to me that the assumption of uncorrelated errors here is rather load-bearing. If you don't get uncorrelated errors over the inputs you actually care about, you are forced to scale back to connecting only √d₀ features to every neuron, correct? And the same holds for the construction right after this one, and probably most of the other constructions shown here?
And if you only get √d₀ connected features per neuron, you scale back to only being able to compute |E| = Θ̃(√d₀·d) arbitrary AND gates per layer, correct?
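For the uncorrelated case, the quoted threshold is easy to check numerically: interference from k randomly chosen connected-but-inactive features has typical size √(k/d₀), which only rivals the ~1 signal around k = d₀ (a toy sketch with made-up sizes, not the post's actual construction):

```python
import numpy as np

rng = np.random.default_rng(0)
d0 = 1024                        # embedding dimension (made-up)
m = 3000                         # number of features (made-up)
f = rng.standard_normal((m, d0)) / np.sqrt(d0)   # random feature directions, norm ~1

read = f[0]                      # direction one neuron reads off; signal size ~1
g = f[1:] @ read                 # interference terms, each of typical size 1/sqrt(d0)

def noise_std(k, trials=5000):
    # total interference when k inactive features are connected to the neuron:
    # a sum of k roughly uncorrelated, mean-zero terms
    sums = np.array([g[rng.integers(0, len(g), size=k)].sum() for _ in range(trials)])
    return sums.std()

print(noise_std(int(np.sqrt(d0))))   # ~d0**-0.25, far below the ~1 signal
print(noise_std(d0))                 # ~1: interference rivals the signal, neurons misfire
```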
Now, the reason these errors are ‘uncorrelated’ is that the features were embedded as random vectors in our layer space. In other words, the distributions over which they are uncorrelated is the distribution of feature embeddings and sets of neurons chosen to connect to particular features. So for any given network, we draw from this distribution only once, when the weights of the network are set, and then we are locked into it.
So this noise will affect particular sets of inputs strongly, systematically, and in the same direction every time. If I divide the set of features into two halves, where the features in each half are embedded along directions that have positive inner products with each other, I can't connect more than √d₀ features from the same half to the same neuron without making it misfire, right? So if I want to implement a layer that performs |E| = Θ̃(d₀·d) ANDs on exactly those features that happen to be embedded within the same half, I can't really do that. Now, for any given embedding, that's maybe only some particular sets of features which might not have much significance to each other. But the embedding directions of features in later layers depend on what was computed and how in the earlier layers, and the limitations on what I can wire together apply at every step.
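The difference between a random subset and a systematically aligned subset, for one fixed weight draw, can be illustrated numerically (a toy sketch with made-up sizes; the most-aligned subset is an extreme stand-in for features from the "same half"):

```python
import numpy as np

rng = np.random.default_rng(0)
d0, m = 256, 2000                  # made-up sizes
f = rng.standard_normal((m, d0)) / np.sqrt(d0)   # fixed random feature embedding

read = f[0]                        # read-off direction of one neuron; signal ~1
g = f[1:] @ read                   # each feature's interference term: fixed once,
                                   # when the weights are drawn, for every input

k = int(np.sqrt(d0))               # connect sqrt(d0) = 16 features

# A random subset: a mean-zero sum, typically ~sqrt(k/d0) << 1.
rand_noise = abs(g[rng.choice(len(g), size=k, replace=False)].sum())

# A subset from the "same half": here the extreme case of the k features whose
# embeddings are most aligned with the read direction. Every term shares a sign.
aligned_noise = np.sort(g)[-k:].sum()

print(rand_noise)       # small
print(aligned_noise)    # comparable to or larger than the ~1 signal: systematic misfire
```

For a fixed embedding the interference terms are constants, not fresh noise per input, so an unlucky (or structurally required) choice of which features to wire together breaks the neuron with far fewer than d₀ connections.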
I am a bit worried that this and similar assumptions about stochasticity might turn out to prevent you from wiring together the features you need to construct arbitrary programs in superposition, with 'noise' from multiple layers turning out to systematically interact in exactly such a way as to prevent you from computing too much general stuff. Not because I see a gears-level way this could happen right now, but because rounding off things as 'noise' that are actually systematic is one of the ways an exciting new theory can go wrong and see structure that isn't there: you are not tracking the parts of the system you have labelled noise, and so miss how the systematics of their interactions constrain the rest of the system.
Like making what seems like a blueprint for a perpetual motion machine because you're neglecting to model some small interactions with the environment that seem like they ought not to affect the energy balance on average, and missing how the energy losses and gains in these interactions are correlated with each other, such that a gain at one step immediately implies a loss at another.
Aside from looking at error propagation more, maybe a way to resolve this might be to switch over to thinking about one particular set of weights instead of reasoning about the distribution the weights are drawn from?
E.g. pick some hyperplanes and declare everything on one side of all of them to be the first set.
Update February 2024: I left Ireland over a year ago, and the group is probably dead now, unfortunately. There’s still an EA group around, which as of this writing seems quite active.
If the SAEs are not full-distribution competitive, I don't really trust that the features they're seeing are actually the variables being computed on, in the sense of reflecting the true mechanistic structure of the learned network algorithm, or that the explanations they offer are correct. If I pick a small enough sub-distribution, I can pretty much always get perfect reconstruction no matter what kind of probe I use, because e.g. measured over a single token the network layers will have representation rank 1, and the entire network can be written as a rank-1 linear transform. So I can declare the activation vector at layer l to be the active "feature", use the single-entry linear maps between SAEs to "explain" how features between layers map to each other, and be done. Those explanations will of course be nonsense and not at all extrapolate out of distribution. I can't use them to make a causal model that accurately reproduces the network's behaviour, or some aspect of it, when dealing with a new prompt.
We don’t train SAEs on literally single tokens, but I would be worried about the qualitative problem persisting. The network itself doesn’t have a million different algorithms to perform a million different narrow subtasks. It has a finite description length. It’s got to be using a smaller set of general algorithms that handle all of these different subtasks, at least to some extent. Likely more so for more powerful and general networks. If our “explanations” of the network then model it in terms of different sets of features and circuits for different narrow subtasks that don’t fit together coherently to give a single good reconstruction loss over the whole distribution, that seems like a sign that our SAE layer activations didn’t actually capture the general algorithms in the network. Thus, predictions about network behaviour made on the basis of inspecting causal relationships between these SAE activations might not be at all reliable, especially predictions about behaviours like instrumental deception which might be very mechanistically related to how the network does well on cross-domain generalisation.
As in, that seems like a minimum requirement for the SAEs to fulfil. Not that this would be enough to make me trust predictions about generalisation based on stories about SAE activations.
Our reconstruction scores were pretty good. We found GPT2 small achieves a cross entropy loss of about 3.3, and with reconstructed activations in place of the original activation, the CE Log Loss stays below 3.6.
Unless my memory is screwing up the scale here, a 0.3 CE loss increase seems quite substantial? A 0.3 CE loss increase on the Pile is roughly the difference between Pythia 410M and Pythia 2.8B. And do I see it right that this is the maximum CE increase for adding in a single SAE, rather than all of them at the same time? So unless there is some very kind correlation in these errors, where every SAE fails to reconstruct roughly the same variance, and that variance at early layers is not used to compute the variance the SAEs at later layers are capturing, the errors would add up? Possibly even worse than linearly? What CE loss do you get then?
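The worry that per-SAE errors should compound can be illustrated with a linear toy model (a sketch with made-up width, depth, and error size, not GPT2 itself; with uncorrelated per-layer reconstruction errors, the total error grows roughly as the square root of the number of patched layers):

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, eps = 64, 12, 0.1    # width, depth, per-layer relative reconstruction error (made-up)

# Random orthogonal "layers", so norms are preserved and errors are easy to track.
layers = [np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(L)]

def forward(x, patched):
    for i, W in enumerate(layers):
        x = W @ x
        if i in patched:   # splice in an imperfect reconstruction at this layer
            x = x + eps * np.linalg.norm(x) * rng.standard_normal(d) / np.sqrt(d)
    return x

x = rng.standard_normal(d)
clean = forward(x, patched=set())

def rel_err(patched):
    return np.linalg.norm(forward(x, patched) - clean) / np.linalg.norm(clean)

print(rel_err({0}))              # ~eps: one reconstruction spliced in
print(rel_err(set(range(L))))    # several times larger: independent errors add in quadrature
```

If the errors were instead correlated across layers in an unkind direction, the growth could be worse than this quadrature-sum baseline.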
Have you tried talking to the patched models a bit and compared to what the original model sounds like? Any discernible systematic differences in where that CE increase is changing the answers?
Can someone destroy my hope early by giving me the Molochian reasons why this change hasn’t been made already and never will be?
MATS has steadily increased in quality over the past two years, and is now more prestigious than AISC. We also have Astra, and people who go directly to residencies at OpenAI, Anthropic, etc. One should expect that AISC doesn’t attract the best talent.
If so, AISC might not make efficient use of mentor / PI time, which is a key goal of MATS and one of the reasons it’s been successful.
AISC isn’t trying to do what MATS does. Anecdotal, but for me, MATS could not have replaced AISC (spring 2022 iteration). It’s also, as I understand it, trying to have a structure that works without established mentors, since that’s one of the large bottlenecks constraining the training pipeline.
Also, did most of the past camps ever have lots of established mentors? I thought it was just the one in 2022 that had a lot? So whatever factors made all the past AISCs work and have participants sing their praises could just still be there.
Why does the founder, Remmelt Ellen, keep posting things described as “content-free stream of consciousness”, “the entire scientific community would probably consider this writing to be crankery”, or so obviously flawed it gets −46 karma? This seems like a concern especially given the philosophical/conceptual focus of AISC projects, and the historical difficulty in choosing useful AI alignment directions without empirical grounding.
He was posting cranky technical stuff during my camp iteration too. The program was still fantastic. So whatever they are doing to make this work seems able to function despite his crankery. With a five year track record, I’m not too worried about this factor.
All but 2 of the papers listed on Manifund as coming from AISC projects are from 2021 or earlier.
In the first link at least, there are only eight papers listed in total, though. With the first camp being in 2018, it doesn't really seem like the rate dropped much? So to the extent you believe your colleagues that the camp used to be good, I don't think the publication record is much evidence that it isn't anymore. Paper production apparently just does not track the effectiveness of the program much. Which doesn't surprise me; I don't think the rate of paper production tracks the quality of AIS research orgs much either.
The impact assessment was commissioned by AISC, not independent. They also use the number of AI alignment researchers created as an important metric. But impact is heavy-tailed, so the better metric is value of total research produced. Because there seems to be little direct research, to estimate the impact we should count the research that AISC alums from the last two years go on to produce. Unfortunately I don’t have time to do this.
Agreed on the metric being not great, and that an independently commissioned report would be better evidence (though who would have commissioned it?). But ultimately, most of what this report is apparently doing is just asking a bunch of AISC alumni what they thought of the camp and what they have been up to since. And then noticing that these alumni often really liked it and have apparently gone on to form a significant fraction of the ecosystem. And I don't think they even caught everyone. IIRC our AISC follow-up LTFF grant wasn't part of the spreadsheets until I wrote Remmelt that it wasn't there.
I am not surprised by this. Like you, my experience is that most of my current colleagues who were part of AISC tell me it was really good. The survey is just asking around and noticing the same.
I was the private donor who gave €5K. My reaction to hearing that AISC was not getting funding was that this seemed insane. The iteration I was in two years ago was fantastic for me, and the research project I got started on there is basically still continuing at Apollo now. Without AISC, I think there’s a good chance I would never have become an AI notkilleveryoneism researcher.
It feels like a very large number of people I meet in AIS today got their start in one AISC iteration or another, and many of them seem to sing its praises. I think 4⁄6 people currently on our interp team were part of one of the camps. I am not aware of any other current training program that seems to me like it would realistically replace AISC’s role, though I admittedly haven’t looked into all of them. I haven’t paid much attention to the iteration that happened in 2023, but I happen to know a bunch of people who are in the current iteration and think trying to run a training program for them is an obviously good idea.
I think MATS and co. are still way too tiny to serve all the ecosystem's needs, and under those circumstances, shutting down a training program with an excellent five-year track record seems like an even more terrible idea than usual. On top of that, the research lead structure they've been trying out for this camp and the last one seems to me like it might have some chance of being actually scalable. I haven't spent much time looking at the projects for the current iteration yet, but from very brief surface exposure they didn't seem any worse on average than the ones in my iteration. Which impressed and surprised me, because these projects were not proposed by established mentors like the ones in my iteration were. A far larger AISC wouldn't be able to replace what a program like MATS does, but it might be able to do what AISC6 did for me, and do it for far more people than anything structured like MATS realistically ever could.
On a more meta point, I have honestly not been all that impressed with the average competency of the AIS funding ecosystem. I don’t think it not funding a project is particularly strong evidence that the project is a bad idea.
Well. Damn. As a vocal critic of the whole concept of superposition, I have to admit this post has changed my mind a lot. An actual mathematical definition that doesn't depend on any fuzzy notions of what is 'human interpretable', and a start on actual algorithms for performing general, useful computation on overcomplete bases of variables.

Everything I'd read on superposition before this pretty much only outlined how you could store and access lots of variables from a linear space with sparse encoding, which isn't exactly a revelation. Every direction is a float, so of course the space can store about float-precision-to-the-nth-power different states, which you can describe as superposed sparse features if you like. But I didn't need that lens to talk about the compression; I could just talk about good old non-overcomplete linear algebra bases instead, with the ≤n basis vectors in that description being the compositional summary variables the sparse inputs got compressed into. If basically all we can do with the 'superposed variables' is make lookup tables of them, there didn't seem to me to be much need for the concept at all to reverse engineer neural networks. Just stick with the summary variables; summarising is what intelligence is all about.
If we can do actual, general computation with the sparse variables? Computations with internal structure that we can’t trivially describe just as well using ≤n floats forming the non-overcomplete linear basis of a vector space? Well, that would change things.
As you note, there’s certainly work left to do here on the error propagation and checking for such algorithms in real networks. But even with this being an early proof of concept, I do now tentatively expect that better-performing implementations of this probably exist. And if such algorithms are possible, they sure do sound potentially extremely useful for an LLM’s job. On my previous superposition-skeptical models, frameworks like the one described in this post are predicted to be basically impossible. Certainly way more cumbersome than this looks. So unless these ideas fall flat when more research is done on the error tolerance, I guess I was wrong. Oops.
I think the idea expressed in the post is for our entire observable universe to be a remnant of such spaghettification in higher dimensions, with basically no thickness remaining along the direction leading to the singularity. So whatever higher-dimensional bound structure the local quantum fields may or may not usually be arranged in is (mostly) gone, and the merely 3+1-dimensional structures of atoms and pelvises we are used to are the result.

I wouldn't know off the top of my head whether you can make this story mathematically self-consistent or not.
Maybe a⊕b is represented "incidentally" because NN representations are high-dimensional with lots of stuff represented by chance
This would be my first guess, conditioned on the observation being real, except strike "by chance". The model likely wants to form representations that can serve to solve a very wide class of prediction tasks over the data with very few non-linearities used, ideally none, as in a linear probe. That's pretty much the hallmark of a good general representation you can use for many tasks.

I thus don't think that comparing to a model with randomized weights is a good falsification. I wouldn't expect a randomly initialized model to have nice general representations.
My stated hypothesis here would then predict that the linear probes for XOR features get progressively worse if you apply them to earlier layers, because the model hasn't yet had time to make the representation that general so early in the computation. So accuracy should start to drop as you look at layers before fourteen.
I’ll also say that if you can figure out a pattern in how particular directions get used as components for many different boolean classification tasks, that seems like the kind of thing that might result in an increased understanding of what these directions encode exactly. What does the layer representation contain, in actual practice, that allows it to do this?
Even architectures-in-the-narrow-sense don’t show overarching scaling laws at current scales, right? IIRC the separate curves for MLPs, LSTMs and transformers do not currently match up into one larger curve. See e.g. figure 7 here.
So a sudden capability jump due to a new architecture outperforming transformers the way transformers outperform MLPs at equal compute cost seems to be very much in the cards?
I intuitively agree that current scaling laws seem like they might be related in some way to a deep bound on how much you can do with a given amount of data and compute, since different architectures do show qualitatively similar behavior even if the y-axes don’t match up. But I see nothing to suggest that any current architectures are actually operating anywhere close to that bound.
If it only requires a simple hack to existing public SOTA, many others will have already thought of said hack and you won’t have any additional edge.
I don’t recall assuming the edge to be unique? That seems like an unneeded condition for Tamsin’s argument, it’s enough to believe the field consensus isn’t completely efficient by default and all relevant actors are sure of all currently deducable edges at all times.
Progress in DL is completely smooth.
Right, if you think it’s completely smooth and thus basically not meaningfully influenced by the actions of individual researchers whatsoever, I see why you would not buy Tamsin’s argument here. But then the reason you don’t buy it would seem to me to be that you think meaningful new ideas in ML capability research basically don’t exist, not because you think there is some symmetric argument to Tamsin’s for people to stay quiet about new alignment research ideas.
I don’t see why this would be ridiculous. To me, e.g. “Superintelligence only requires [hacky change to current public SOTA] to achieve with expected 2025 hardware, and OpenAI may or may not have realised that already” seems like a perfectly coherent way the world could be, and is plenty of reason for anyone who suspects such a thing to keep their mouth shut about gears-level models of  that might be relevant for judging how hard and mysterious the remaining obstacles to superintelligence actually are.
It’s not that hard to build an AI that saves everyone: you just need to solve [some problems] and combine the solutions. Considering how easy it is compared to what you thought, you should decrease your P(doom) / shorten your timelines.
I’m not sure what you’re saying here exactly. It seems to me like you’re pointing to a symmetric argument favoring low doom, but if someone had an idea for how to do AI alignment right, why wouldn’t they just talk about it? Doesn’t seem symmetrical to me.
(I disagree. Indeed, until recently governance people had very few policy asks for government.)
Did that change because people finally finished doing enough basic strategy research to know what policies to ask for? It didn’t seem like that to me. Instead, my impression was that it was largely triggered by ChatGPT and GPT4 making the topic more salient, and AI safety feeling more inside the Overton window. So there were suddenly a bunch of government people asking for concrete policy suggestions.
“Pandemics” aren’t a locally valid substitute step in my own larger argument, because an ASI needs its own manufacturing infrastructure before it makes sense for the ASI to kill the humans currently keeping its computers turned on.
When people are highly skeptical of the nanotech angle yet insist on a concrete example, I've sometimes gone with a pandemic, coupled with limited access to medications that temporarily stave off, but don't cure, that pandemic, as a way to force a small workforce of humans, preselected to cause few problems, to maintain the AI's hardware and build it the seed of a new infrastructure base while the rest of humanity dies. I feel like this has so far maybe been more convincing, and perceived as "less sci-fi", than Drexler-style nanotech by the people I've tried it on (small sample size, n<10).

Generally, I suspect that not basing the central example on a position on one side of yet another fierce debate in technology forecasting trumps making things sound less like a movie where the humans might win. In my experience with these conversations so far, the rate of people understanding that something sounding like a movie does not imply the humans have a realistic chance of winning in real life seems to exceed the rate of people getting on board with scenarios that involve any hint of Drexler-style nanotech.
For example, if an SAE gives us 16x as many dimensions as the original activations, and we find that half of those are interpretable, to me this seems like clear evidence of superposition (8x as many interpretable directions!). How would you interpret that phenomena?
I don’t have the time and energy to do this properly right now, but here’s a few thought experiments to maybe help communicate part of what I mean:Say you have a transformer model that draws animals. As in, you type “draw me a giraffe”, and then it draws you a giraffe. Unknown to you, the way the model algorithm works is that the first thirty layers of the model perform language processing to figure out what you want drawn, and output a summary of fifty scalar variables that the algorithms in the next thirty layers of the model use to draw the animals. And these fifty variables are things like “furriness”, “size”, “length of tail” and so on.The latter half of the model does then not, in any real sense, think of the concept “giraffe” while it draws the giraffe. It is just executing purely geometric algorithms that use these fifty variables to figure out what shapes to draw. If you then point a sparse autoencoder at the residual stream in the latter half of the model, over a data set of people asking the network to draw lots of different animals, far more than fifty or the network width, I’d guess the “sparse features” the SAE finds might be the individual animal types. “Giraffe”, “elephant”, etc. . Or, if you make the encoder dictionary larger, more specific sparse features like “fat giraffe” would start showing up. And then, some people may conclude that the model was doing a galaxy-brained thing where it was thinking about all of these animals using very little space, compressing a much larger network in which all these animals are variables. This is kind of true in a certain sense if you squint, but pretty misleading. The model at this point in the computation no longer “knows” what a giraffe is. It just “knows” what the settings of furriness, tail length, etc. are right now. 
If you manually go into the network and set the fifty variables to something that should correspond to a unicorn, the network will draw you a unicorn, even if there were no unicorns in the training data and the first thirty layers in the network don't know how to set the fifty variables to draw one. So in a sense, this algorithm is more general than a cleverly compressed lookup table of animals would be. And if you want to learn how the geometric algorithms that do the drawing work, what they do with the fifty scalar summary statistics is what you will need to look at.

Just because we can find a transformation that turns an NN's activations into numbers that correlate with what a human observer would regard as separate features of the data does not mean the model itself is treating these as elementary variables in its own computations in any meaningful sense. The only thing the SAE is showing you is that the information present in the model can be written as a sum of some sparsely activating generators of the data. This does not mean that the model is processing the problem in terms of these variables. Indeed, SAE dictionaries are almost custom-selected not to give you variables that a well-generalizing algorithm would use to think about problems with big, complicated state spaces. Good summary variables are highly compositional, not sparse. They can all be active at the same time in any setting, letting you represent the relevant information from a large state space with just a few variables, because they factorise. Temperature and volume are often good summary variables for thinking about thermodynamic systems because the former tells you nothing about the latter and they can co-occur in any combination of values.
Variables with strong sparsity conditions on them instead have high mutual information, making them partially redundant, and ripe for compressing away into summary statistics.

If an NN (artificial or otherwise) is, say, processing images coming in from the world, it is dealing with an exponentially large state space. Every pixel can take one of several values. Luckily, the probability distribution of pixels is extremely peaked. The supermajority of pixel settings are TV static that never occurs, and thermal noise that doesn't matter for the NN's task. One way to talk about this highly peaked pixel distribution may be to describe it as a sum of a very large number of sparse generators. The model then reasons about this distribution by compressing the many sparse generators into a small set of pretty non-sparse, highly compositional variables. For example, many images contain one or a few brown branchy structures of a certain kind, which come in myriad variations. The model summarises the presence or absence of any of these many sparse generators with the state of the variable "tree", which tracks how much the input is "like a tree".

If the model has a variable "tree" and a variable "size", the myriad brown, branchy structures in the data might, for example, show up as sparsely encoded vectors in a two-dimensional ("tree", "size") manifold. If you point an SAE at that manifold, you may get out sparse activations like "bush" (mid tree, low size), "house" (low tree, high size), "fir" (high tree, high size). If you increase the dictionary size, you might start getting more fine-grained sparse data generators. E.g. "checkerberry bush" and "honeyberry bush" might show up as separate, because they have different sizes.

Humans, I expect, work similarly.
So the human-like abstractions the model may or may not be thinking in and that we are searching for will not come in the form of sparse generators of layer activations, because human abstractions are the summary variables you would be using to compress these sparse generators. They are the type-of-thing you use to encode a sparse world, not the type-of-thing being encoded. That our SAE is showing us some activations that correlate with information in the input humans regard as meaningful just tells us that the data contains sparse generators humans have conceptual descriptions for, not that the algorithms of the network themselves are encoding the sparse generators using these same human conceptual descriptions. We know it hasn’t thrown away the information needed to compute that there was a bush in the image, but we don’t know it is thinking in bush. It probably isn’t, else bush would not be sparse with respect to the other summary statistics in the layer, and our SAE wouldn’t have found it.
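The ("tree", "size") picture can be sketched in a toy example (all sizes, prototype names, and numbers here are made up for illustration, and no SAE is actually trained; the dictionary is just handed the sparse generators directly):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32                                    # layer width (made-up)
# Two orthonormal "compositional" directions, standing in for "tree" and "size".
axes = np.linalg.qr(rng.standard_normal((d, 2)))[0].T

# Sparse generators of the data: a few prototypes at fixed (tree, size) settings,
# e.g. bush=(0.5, 0.1), house=(0.1, 0.9), fir=(0.9, 0.8). All names illustrative.
protos_2d = np.array([[0.5, 0.1], [0.1, 0.9], [0.9, 0.8], [0.7, 0.2]])
labels = rng.integers(0, 4, size=500)
X = (protos_2d[labels] + 0.02 * rng.standard_normal((500, 2))) @ axes

# Compositional code: 2 numbers per input, almost always both nonzero.
comp_code = X @ axes.T
print((np.abs(comp_code) > 0.05).mean())          # ~1: dense, but only 2 dims

# SAE-style dictionary: one atom per sparse generator ("animal").
atoms = protos_2d @ axes
nearest = ((X[:, None, :] - atoms[None]) ** 2).sum(-1).argmin(axis=1)
sparse_code = np.zeros((500, 4))
sparse_code[np.arange(500), nearest] = 1.0        # 1-hot: maximally sparse
recon = sparse_code @ atoms
print(np.linalg.norm(recon - X, axis=1).max())    # small: the sparse code fits too
```

Both descriptions reconstruct the activations, but they are different kinds of object: the sparse code enumerates the generators of the data, while the two dense coordinates are the variables a drawing algorithm in later layers could actually compute with.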
The causal graph is large in general, but IMO that’s just an unavoidable property of models and superposition.
This is a discussion that would need to be its own post, but I think superposition is basically not real and a confused concept. Leaving that aside, the vanilla reading of this claim also seems kind of obviously false for many models, otherwise optimising them in inference through e.g. low rank approximation of weight matrices would never work. You are throwing away at least one floating point number worth of description bits there.
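The low-rank point can be illustrated with a toy weight matrix (made-up size and spectrum; real trained matrices differ, but fast-decaying spectra are common, which is what makes this kind of inference-time truncation viable at all):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256
# Toy weight matrix with a fast-decaying singular value spectrum.
U = np.linalg.qr(rng.standard_normal((d, d)))[0]
V = np.linalg.qr(rng.standard_normal((d, d)))[0]
W = U @ np.diag((1.0 + np.arange(d)) ** -1.5) @ V.T

u, s, vt = np.linalg.svd(W)
k = 32
W_k = (u[:, :k] * s[:k]) @ vt[:k]   # rank-32 approximation: ~4x fewer numbers to store

x = rng.standard_normal(d)
err = np.linalg.norm((W - W_k) @ x) / np.linalg.norm(W @ x)
print(err)                          # small relative error despite discarding most of W
```

If every description bit in the weights were load-bearing, discarding 7/8 of the singular directions like this could not leave the input-output map nearly unchanged.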
I’m confused by why you don’t consider “only a few neurons being non-zero” to be a “low dimensional summary of the relevant information in the layer”
A low-dimensional summary of a variable vector f of size n is a fixed set of d<n random variables that suffice to summarise the state of f. To summarise the state of f using the activations in an SAE dictionary, I have to describe the state of more than n variables. That these variables are sparse may sometimes let me define an encoding scheme for describing them that takes fewer than n numbers, but that just corresponds to undoing the autoencoding and then performing some other compression.
SAEs are almost the opposite of the principle John is advocating for here. They deliver sparsity in the sense that only a few entries of the dictionary you get are non-zero at the same time; they do not deliver sparsity in the sense of a low-dimensional summary of the relevant information in the layer, or whatever other causal cut you deploy them on. Instead, the dimensionality of the representation gets blown up to be even larger.
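A dimension count makes the point concrete (made-up sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 512                     # layer width vs dictionary size (made-up)
D = rng.standard_normal((m, n)) / np.sqrt(n)   # decoder directions

s = np.zeros(m)                    # an SAE code: sparse, but m-dimensional
s[rng.choice(m, size=8, replace=False)] = rng.random(8)
x = s @ D                          # the layer activation it describes

# The dictionary description has *more* coordinates than the layer state, not
# fewer: 512 vs 64 here. Exploiting the sparsity of s to write it down more
# compactly is just a roundabout way of re-deriving something x-sized.
print(x.size, s.size)
```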