AI notkilleveryoneism researcher, focused on interpretability.
Personal account, opinions are my own.
I have signed no contracts or agreements whose existence I cannot mention.
I suspect language model in-context learning[1] ‘approximates Solomonoff induction’ in the vague sense that it is a pattern-matching thingy navigating a search space somewhat similar in character to the space of possible computer programs: the space consists of inputs/parameters for some very universal, Turing-complete-ish computational architecture in which the lm expresses its guesses for patterns, and the search looks for a pattern that matches the data.
The way they navigate this search space is totally different from SI, which just checks every single point in its search space of UTM programs. But the geometry of the space is similar to the geometry of the space of UTM programs, with properties like simpler hypotheses corresponding to exponentially more points in the space.
So, even if the language models’ in-context learning algorithm were kind of maximally stupid, and literally just guessed random points in the search space until it found a good match to the data, we’d expect its outputs to somewhat match up with the universal distribution, just because both are ≈ uniformly random samples from a space of inputs to Turing-complete-ish computational architectures.
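As a throwaway illustration of that last point, here is a toy sketch entirely of my own invention (a made-up, deliberately silly instruction set, not Turing-complete and nothing to do with any actual LM): uniformly sampling random programs already concentrates probability mass on simple, regular outputs, because many more programs map onto them.

```python
import random
from collections import Counter

OPS = ["0", "1", "dup", "halt"]  # hypothetical toy instruction set

def run(program, max_len=32):
    """Interpret a program as instructions that build up an output string."""
    out = ""
    for op in program:
        if op == "halt":
            break
        elif op == "dup":
            out = out + out      # doubling makes regular strings cheap to produce
        else:
            out += op            # "0" or "1": append a literal symbol
        if len(out) > max_len:
            out = out[:max_len]
            break
    return out

counts = Counter()
for _ in range(200_000):
    length = random.randint(1, 8)
    program = [random.choice(OPS) for _ in range(length)]
    counts[run(program)] += 1

# Short and highly regular strings (e.g. "00000000") dominate the sample; incompressible
# strings of the same length are far rarer, mirroring the shape of the universal distribution.
for output, n in counts.most_common(12):
    print(repr(output), n)
```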
So, to the extent that these experimental results actually hold up[2], I think the main thing they’d be telling us is that the ‘architecture’ or ‘language’ the lm expresses its in-context guesses in is highly expressive, with a computational universality similar to that of UTMs and many neural network architectures.
Arguably, the latter may be a special case of the former with an appropriate choice of universal Turing machine (UTM), but I find this perspective to be a bit of a stretch. At the very least I expect LLM ICL to be similar to a universal distribution conditioned on some background information.
What’s even the difference between these propositions? Any UTM can be expressed in another UTM as a bit string of prior knowledge to condition on, and I’d intuitively expect the reverse to hold as well, though I don’t actually know that for sure.
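The closest standard fact I know for the first direction is the invariance theorem (notation here is mine, not from the thread): if $U$ and $V$ are universal prefix machines, there is a finite ‘compiler’ string $p_V$ with $U(p_V q) = V(q)$ for every program $q$, so

$$M_U(x) \;=\; \sum_{q\,:\,U(q)=x*} 2^{-\ell(q)} \;\ge\; \sum_{q\,:\,V(q)=x*} 2^{-\ell(p_V)-\ell(q)} \;=\; 2^{-\ell(p_V)}\,M_V(x)$$

(where $x*$ means the output starts with $x$). Switching UTMs costs at most a constant factor, which is the usual sense in which a choice of UTM amounts to a finite chunk of prior information. Whether arbitrary conditioning on background data can conversely be absorbed into a choice of UTM is the part I don’t know how to make precise.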
You seem to be equating superposition and polysemanticity here, but they’re not the same thing.
In other words, will the AGI actually want you to push the button? Or would it want some random weird thing because inner alignment is hard?
My answer is: yes, it would want you to push the button, at least if we’re talking about brain-like AGI, and if you set things up correctly.
Again, getting a brain-like AGI addicted to a reward button is a lot like getting a human or animal hooked on an addictive drug.
Humans addicted to drugs often exhibit weird meta-preferences like ‘I want to stop wanting the drug’, or ‘I want to find an even better kind of drug’.
For this reason, I am not at all confident that a smart thing exposed to the button would later generalise into a coherent, super-smart thing that wants the button to be pressed. Maybe it would perceive the circuits in it that bound to the button reward as foreign to the rest of its goals, and work to remove them. Maybe the button binding would generalise in a strange way.
‘Seek to directly inhabit the cognitive state caused by the button press’, ‘along an axis of cognitive states associated with button presses of various strengths, seek to walk to a far end that does not actually correspond to any kind of button press’, ‘make the world have a shape related to generalisations of ideas that tended to come up whenever the button was pressed’, and just generally ‘maximise a utility function made up of algorithmically simple combinations of button-related and pre-button-training-reward-related abstractions’ all seem like goals I could imagine a cognitively enhanced human button addict generalising toward. So I am not confident the AGI would generalise to wanting the button to be pushed either, not in the long term.
Thank you. Do you know anyone who claims to have observed it?
If terminal lucidity is a real phenomenon, information lost to dementia could still be recoverable in principle. So, cryo-preserving people suffering from dementia for later mind uploading could still work sometimes.
I just heard about terminal lucidity for the first time from Janus:
If your loved one is suffering from (even late-stage) dementia, it’s likely that the information of their mind isn’t lost, just inaccessible until a cure is found.
Sign them up for cryonics.
This seems pretty important if true. I’d previously thought that if a loved one came down with Alzheimer’s, that was likely the end for them in this branch of the world[1], even with cryonics. I’d planned to set up some form of assisted suicide for myself if I was ever diagnosed, to get frozen before my brain got damaged too much.
Skimming the Wikipedia article and the first page of Google results, the documentation we have of terminal lucidity doesn’t seem great. But it tentatively looks to me like it’s probably a real thing at least in some form? Though I guess with the relative rarity of clearly documented cases, it might actually only work for some specific neurological disorders. I find it somewhat hard to imagine how something like this could work with a case of severe Alzheimer’s. Doesn’t that literally atrophy your brain?
This is very much not my wheelhouse though. I’d appreciate other people’s opinions, especially if they know something about this area of research.
It seems maybe possible in physical principle to bring back even minds lost to thermodynamic chaos. But that seems like an engineering undertaking so utterly massive I’m not sure even a mature civilisation controlling most of the lightcone could pull it off.
I agree it’s not a valid argument. I’m not sure about ‘dishonest’ though. They could just be genuinely confused about this. I was surprised how many people in machine learning seem to think the universal approximation theorem explains why deep learning works.
Anecdotally, the effect of LLMs on my workflow hasn’t been very large.
At a moderate P(doom), say under 25%, from a selfish perspective it makes sense to accelerate AI if it increases the chance that you get to live forever, even if it increases your risk of dying. I have heard from some people that this is their motivation.
If this is you: Please just sign up for cryonics. It’s a much better immortality gambit than rushing for ASI.
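For concreteness, a toy version of that selfish calculation, with numbers that are purely illustrative and not anyone’s actual estimates: say rushing ASI carries $P(\text{doom}) = 0.25$ and, conditional on no doom, a $0.5$ chance that you personally make it to indefinite lifespan, versus a chance of roughly $0$ on the slow path. Then rushing buys you

$$P(\text{immortality}) = (1 - 0.25) \times 0.5 = 0.375$$

at the price of an extra $0.25$ chance of dying sooner than you otherwise would. Cryonics is a way of chasing the same payoff without taking on that extra death term.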
I like AE Studios. They seem to genuinely care about AI not killing everyone, and have been willing to actually back original research ideas that don’t fit into existing paradigms.
Side note:
Previous posts have been met with great reception by the likes of Eliezer Yudkowsky and Emmett Shear, so we’re up to something good.
This might be a joke, but just in case it’s not: I don’t think you should reason about your own alignment research agenda like this. I think Eliezer would probably be the first person to tell you that.
But they’d be too unchanged: the “afraid of mice” circuit would still be checking for “grey and big and mammal and …” as the finetune dataset included no facts about animal fears. While some newer circuits formed during fine tuning would be checking for “grey and big and mammal and … and high-scrabble-scoring”. Any interpretability tool that told you that “grey and big and mammal and …” was “elephant” in the first model is now going to have difficulty representing the situation.
Thank you, this is a good example of a type-of-thing to watch out for in circuit interpretation. I had not thought of this before. I agree that an interpretability tool that rounded those two circuits off to taking in the ‘same’ feature would be a bad interpretability tool. It should just show you that those two circuits exist, and have some one dimensional features they care about, and those features are related but non-trivially distinct.
But this is not at all unique to the sort of model used in the counterexample. A ‘normal’ model can still have one embedding direction for elephant at one point, call it $v_{\text{elephant}}$, used by a circuit $C_1$, then in fine-tuning switch to a slightly different embedding direction $v'_{\text{elephant}}$. Maybe it learned more features in fine-tuning, some of those features are correlated with elephants and ended up a bit too close in cosine similarity to $v_{\text{elephant}}$, and so interference can be lowered by moving the embedding around a bit. A circuit $C_2$ learned in fine-tuning would then be reading from this $v'_{\text{elephant}}$ and not match $C_1$, which is still reading in $v_{\text{elephant}}$. You might argue that $C_1$ will surely want to adjust to start using $v'_{\text{elephant}}$ as well to lower the loss, but that would seem to apply equally well to your example. So I don’t see how this is showing that the model used in the original counterexample has no notion of an elephant in a sense that does not also apply to the sort of models people might tend to imagine when they think in the conventional SDL paradigm.
EDIT: On a second read, I think I misunderstood you here. You seem to think the crucial difference is that the delta between $v_{\text{elephant}}$ and $v'_{\text{elephant}}$ is mostly ‘unstructured’, whereas the difference between “grey and big and mammal and …” and “grey and big and mammal and … and high-scrabble-scoring” is structured. I don’t see why that should matter though. So long as our hypothetical interpretability tool is precise enough to notice the size of the discrepancy between those features and not throw them into the same pot, we should be fine. For that, it wouldn’t seem to me to really matter much whether the discrepancy is ‘meaningful’ to the model or not.
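A tiny numerical sketch of what I mean (made-up toy vectors and numbers of my own; nothing here comes from the actual counterexample or any real model):

```python
import numpy as np

# A circuit C1 reads "elephant" via a dot product with v_elephant. After fine-tuning,
# the model nudges the embedding to a nearby direction v_elephant_ft (e.g. to reduce
# interference with newly learned features). A new circuit C2 reads from v_elephant_ft.
# An interpretability tool now sees two related but non-identical read-off directions.

rng = np.random.default_rng(0)
d = 512
v_elephant = rng.normal(size=d)
v_elephant /= np.linalg.norm(v_elephant)

# Nudge the embedding slightly during "fine-tuning".
perturbation = rng.normal(size=d)
perturbation /= np.linalg.norm(perturbation)
v_elephant_ft = v_elephant + 0.15 * perturbation
v_elephant_ft /= np.linalg.norm(v_elephant_ft)

def circuit_C1(x):  # pre-fine-tuning read-off
    return x @ v_elephant

def circuit_C2(x):  # read-off learned during fine-tuning
    return x @ v_elephant_ft

activation = 3.0 * v_elephant_ft     # an "elephant" activation in the fine-tuned model
print("cosine(v_elephant, v_elephant_ft):", v_elephant @ v_elephant_ft)
print("C1 output:", circuit_C1(activation))  # slightly off from the true feature value 3.0
print("C2 output:", circuit_C2(activation))  # exactly 3.0
```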
I’m with @chanind: If elephant is fully represented by a sum of its attributes, then it’s quite reasonable to say that the model has no fundamental notion of an elephant in that representation.
...
This is not a load-bearing detail of the example. If you like, you can instead imagine a model that embeds 1000 animals in an e.g. 100-dimensional subspace, with a 50-dimensional sub-sub-space where the embedding directions correspond to 50 attributes, and a 50-dimensional sub-sub-space where the embeddings are just random.
This should still get you basically the same issues the original example did, I think? For any dictionary decomposition of the activations you pick, some of the circuits will end up looking like a horrible mess, even though they’re secretly taking in a very low-rank subspace of the activations that’d make sense to us if we looked at it. I should probably double-check that when I’m more awake though.[1]
I think the central issue here is mostly just having some kind of non-random, ‘meaningful’ feature embedding geometry that the circuits care about, instead of random feature embeddings.
EDIT: I am now more awake. I still think this is right.
The kind of ‘alignment technique’ that successfully points a dumb model in the rough direction of doing the task you want in early training does not necessarily straightforwardly connect to the kind of ‘alignment technique’ that will keep a model pointed quite precisely in the direction you want after it gets smart and self-reflective.
For a maybe not-so-great example, human RL reward signals in the brain used to successfully train and aim human cognition from infancy to point at reproductive fitness. Before the distributional shift, our brains usually neither got completely stuck in reward-hack loops, nor used their cognitive labour for something completely unrelated to reproductive fitness. After the distributional shift, our brains still don’t get stuck in reward-hack loops that much and we successfully train to intelligent adulthood. But the alignment with reproductive fitness is gone, or at least far weaker.
How much money would you guess was lost on this?
Yes.
Technically you didn’t specify that the encoder can’t be an arbitrary function, so you’d be able to reconstruct activations combining different bases, but it’d be horribly convoluted in practice.
I wouldn’t even be too fussed about ‘horribly convoluted’ here. I’m saying it’s worse than that. We would still have a problem even if we allowed ourselves arbitrary encoder functions to define the activations in the dictionary and magically knew which ones to pick.
The problem here isn’t that we can’t make a dictionary that includes all the feature directions as dictionary elements. We can do that. For example, while we can’t write
$$\vec{a}_{\text{elephant}} = 1.0\,\vec{v}_{\text{elephant}} + 1.0\,\big(\vec{v}_{\text{grey}} + \vec{v}_{\text{big}} + \vec{v}_{\text{mammal}} + \dots\big)$$
because those sums each already equal $\vec{a}_{\text{elephant}}$ on their own, we can write
$$\vec{a}_{\text{elephant}} = 0.5\,\vec{v}_{\text{elephant}} + 0.5\,\big(\vec{v}_{\text{grey}} + \vec{v}_{\text{big}} + \vec{v}_{\text{mammal}} + \dots\big)\,.$$
The problem is instead that we can’t make a dictionary that has the feature activations as the coefficients in the dictionary. This is bad because it means our dictionary activations cannot equal the scalar variables the model’s own circuits actually care about. They cannot equal the ‘features of the model’ in the sense defined at the start, the scalar features comprising its ontology. As a result, if we were to look at a causal graph of the model, using the half-size dictionary feature activations we picked as the graph nodes, a circuit taking in the elephant feature through a linear read-off along the direction $\vec{v}_{\text{elephant}}$ would have edges in our graph connecting it to both the elephant direction, making up about 50% of the total contribution, and the fifty attribute directions, making up the remaining 50%. Same the other way around: any circuit reading in even a single attribute feature will have edges connecting to all of the animal features[1], making up about 50% of the total contribution. It’s the worst of both worlds. Every circuit looks like a mess now.
Since the animals are sparse, in practice this usually means edges to a small set of different animals for every data point. Whichever ones happen to be active at the time.
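If it helps, here is the same point as a minimal numpy illustration (toy numbers of my own; for simplicity the attribute directions are just taken to be the standard basis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_attr = 50
attrs = np.eye(n_attr)                       # 50 orthonormal attribute directions
elephant_attr_values = rng.uniform(0.5, 1.5, size=n_attr)
v_elephant = attrs.T @ elephant_attr_values  # elephant direction = its attribute combination

# "True" features for an elephant input: elephant = 1, attributes = elephant's values.
x = 1.0 * v_elephant

# A dictionary containing v_elephant AND the 50 attribute directions can reconstruct x,
# but only by splitting the magnitude, e.g. half-and-half:
recon_half = 0.5 * v_elephant + attrs.T @ (0.5 * elephant_attr_values)
print(np.allclose(recon_half, x))            # True

# Using the actual feature activations as coefficients double-counts the activation:
recon_full = 1.0 * v_elephant + attrs.T @ (1.0 * elephant_attr_values)
print(np.allclose(recon_full, 2 * x))        # True -- twice the real activation

# So the dictionary coefficients (0.5, 0.5 * attribute values) cannot equal the scalar
# feature values (1, attribute values) that linear probes / the model's circuits see:
print(v_elephant @ x / (v_elephant @ v_elephant))   # a probe for "elephant" reads 1.0
```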
E.g. it’s not possible to represent an elephant with any arbitrary combination of attributes, as the attributes themselves are what defines the elephant direction.
You can’t represent elephants along with arbitrary combinations of attributes. You can’t do that in a scheme where feature directions are fully random with no geometry either, though. There, only a small number of features can have non-zero values at the same time, so you still only get at most roughly as many non-zero attribute features at once as the space has dimensions, here about 50.[1]
We would want the dictionary to learn the attributes, not arbitrary combinations of attributes, since these are the true “base units” that can vary freely.
You can call them the “base units” if you like. But that won’t change the fact that some directions in the space spanned by those “base units” are special, with associated circuits that care about those directions in particular, and understanding or even recognising those circuits in a causal graph made of the “base units” will be pretty darned hard. For the same reason trying to understand the network in the neuron basis is hard.
Put another way, there’s no way to represent an “elephant” in this scheme without also attaching attributes to it.
Yes.
Likewise, it’s not possible to differentiate between an elephant with the set of attributes x y and z and a rabbit with identical attributes x y and z, since the sum of attributes are what you’re calling an elephant or rabbit.
Not quite. You cannot specify a rabbit and simultaneously specify the rabbit having arbitrary numerical attribute values for attributes differing from normal rabbits. You can have a rabbit, and some attributes treated as sparse boolean-ish features, at the same time. E.g. $\vec{v}_{\text{rabbit}} + \vec{v}_{\text{cute}}$ works. Circuits downstream that store facts about rabbits will still be triggered by this activation vector. Circuits downstream that do something with the cute attribute will be reading in a cute-attribute value of $1$ plus the cute-coefficient of rabbits.
A consequence of this is that ‘cute rabbit’ is a bit cuter than either ‘cute’ or ‘rabbit’ on their own. But that doesn’t seem particularly strange to me. Associations in my own mind sure seem to work like that.
Less, if you want to be able to perform computation in superposition.
Similarly, for people wanting to argue from the other direction, who might think a low current valuation is case-closed evidence against their success chances: to be clear, I think the investors would be wrong to think that AGI/ASI soon-ish isn’t pretty likely.
OpenAI’s valuation is very much reliant on being on a path to AGI in the not-too-distant future.
Really? I’m mostly ignorant on such matters, but I’d thought that their valuation seemed comically low compared to what I’d expect if their investors thought that OpenAI was likely to create anything close to a general superhuman AI system in the near future.[1] I considered this evidence that they think all the AGI/ASI talk is just marketing.
Well ok, if they actually thought OpenAI would create superintelligence as I think of it, their valuation would plummet because giving people money to kill you with is dumb. But there’s this space in between total obliviousness and alarm, occupied by a few actually earnest AI optimists. And, it seems to me, not occupied by the big OpenAI investors.
If I understand correctly, it sounds like you’re saying there is a “label” direction for each animal that’s separate from each of the attributes.
No, the animal vectors are all fully spanned by the fifty attribute features.
I’m confused why a dictionary that consists of a feature direction for each attribute and each animal label can’t explain these activations? These activations are just a (sparse) sum of these respective features, which are an animal label and a set of a few attributes, and all of these are (mostly) mutually orthogonal.
The animal features are sparse. The attribute features are not sparse.[1]
In this sense the activations are just the sum of the various elements of the dictionary multiplied by a magnitude, so it seems like you should be able to explain these activations using dictionary learning.
The magnitudes in a dictionary seeking to decompose the activation vector into these 1050 features will not be able to match the actual magnitudes of the features as seen by linear probes and the network’s own circuits.
Is the idea that the 1000 animals and 50 attributes form an overcomplete basis, therefore you can come up with infinite ways to span the space using these basis components?
No, that is not the idea.
Relative to the animal features at least. They could still be sparse relative to the rest of the network if this 50-dimensional animal subspace is rarely used.
I guess I wouldn’t expect UTM switching to be able to express every possible conditioning; that wouldn’t make sense, since conditioning can exclude TMs, while UTMs can all express any TM. But that doesn’t strike me as the sort of conditioning prior knowledge of the internet would impose?
Actually, now that I think about it, I guess it could be.