So I know it’s beside the point of your post, and by no means the core thesis, but I can’t help but notice that in your prologue you write this:
“A serious, believable AI alignment agenda would be grounded in a deep mechanistic understanding of both intelligence and human values. Its masters of mind engineering would understand how every part of the human brain works and how the parts fit together to comprise what their ignorant predecessors would have thought of as a person. They would see the cognitive work done by each part and know how to write code that accomplishes the same work in pure form.”
I have to admit this bugs me. It bugs me specifically because it triggers my pet peeve of “if only we had done the previous AI paradigm better, we wouldn’t be in this mess.” It bugs me because it tells me the author has not really learned the core lessons of deep learning. They have not really gotten it. So I’m going to yap into my phone and try to explain — probably not for the last time. I’d like to hope it’s the last time, but I know better; I’ll probably have to explain this over and over.
I want to try to explain why I think this is just not a good mindset to be in, not a good way to think about things, and in fact why it focuses you on possibilities and solutions that do not exist. More importantly, it means you’ve failed to grasp important dimensions of alignment as a problem, because you’ve failed to grasp important dimensions of AI as a field.
I think we can separate AI into multiple eras and multiple paradigms. And if you look at these paradigms, a lot of discussion about AI rests on warrants buried under old lore: positions that seemed justified because certain pieces of evidence showed up at certain times, but which look much more absurd, or at least much harder to justify, if you encounter them fresh, without that history.
I would say that AI as a concept gets started in the 50s, with the Dartmouth workshop and then the MIT AI Lab. The very first AI paradigm is just fiddling around. There is no paradigm. The early definition of AI would include many things that we would now just consider software — compilers, for example, were at one point considered AI research. Basically any form of automation of human reasoning or cognitive labor was considered AI. That’s a very broad definition, and it lasts for a while. My recollection — and I’m just yapping into my phone rather than consulting a book — is that this lasts maybe until the late 60s, early 70s, when you get the first real AI paradigm: grammar-based AI.
It’s also important to remember how naive the early AI pioneers were. There’s the famous line from the Dartmouth proposal, which says something like, “we think a two-month, ten-man study can make a significant advance on this whole artificial intelligence thing.” Just wildly, naively optimistic — and they stayed that way for quite a number of years. You can find interviews from the 60s where AI researchers believe they’re going to have what we would now basically consider AGI within a single-digit number of years. It in fact contributed to the first wave of major automation panic in the 60s — but that’s a different subject and I’d have to do a bunch of research to really do it justice.
The point is that it took time to be disabused of the notion that we were going to have AGI in a couple of years because we had the computer. Why did people ever think this in the first place? You look at all the computing power needed to do deep learning, you look at the computational requirements to run even a good compiler, and these computers back then were tiny — literally kilobytes of RAM, minuscule CPU power, minuscule memory. How could they ever think they were on the verge of AGI?
The answer is that their reasoning went: the kinds of computations the computer can be programmed to do — math problems, calculus problems — are the hardest human cognitive abilities. The things the computer does so easily are the hardest things for a human to do. Therefore, the reasoning went, if we’re already starting from a baseline of the hardest things a human can do, it should be very easy to get to the easiest things — like walking.
And this is where the naive wild over-optimism comes from. What we eventually learned was that walking is very hard. Even piloting a little insect body is very hard. Replicating the behavior of an insect — the pathfinding, the proprioceptive awareness, the environmental awareness of an insect — is quite difficult. Especially on that kind of hardware, it’s basically impossible.
Once people started to realize this, they settled into the first real AI paradigm: grammar-based AI. What people figured was that you have these compilers — the Fortran compiler, the Lisp interpreter had been invented by then, along with some elaborations. Compilers seem to be capable of doing complex cognitive work. They can unroll a loop, they can do these intricate programming tasks that previously required a dedicated person to hand-specify all the behaviors. A compiler is capable of fairly complex translation between a high-level program and the detailed behaviors the machine should do to implement that behavior efficiently — behaviors that previously would have had to be hand-specified by a programmer.
For anyone unfamiliar with compilers: the way a compiler basically works, as a vast oversimplification, is that it has a series of rules in what’s called a context-free grammar. The practical difference from a natural-language grammar is that you are never reliant on context outside the statement itself for the meaning of the statement — or at least, any context you need, like a variable name, is formally available to the compiler. (Strictly speaking, a context-free grammar can still be ambiguous; programming languages are deliberately designed with grammars and disambiguation rules so that theirs aren’t.) A statement always reduces to one unambiguous, final string. You never have to decide between two interpretations based on context.
The thought process was: we have these compilers, and they seem capable of using a series of formal language steps to take high-level intentions from a person and translate them into behaviors. They even have, at least the appearance of, autonomy. Compilers are capable of thinking of ways to express the behavior of high-level code that the programmer might not even have thought of. There’s a sense of genuine cognitive autonomy from the programmer — you’re able to get out more than you’re putting in. I think there’s a metaphor like “some brains are like fishing, you put one idea in and you get two ideas out.” That seems like it was kind of the core intuition behind formal grammar AI: that a compiler follows individually understandable rules and yet produces behaviors that express what the programmer meant through ways the programmer would not have thought of themselves. You start to feel the machine becoming autonomous, which is very attractive.
This also lined up with the theories of thinkers like Noam Chomsky. The entire concept of the context-free grammar, as distinct from natural grammar, is, as I understand it, a Chomsky concept. So it’s really the Chomsky era of AI. This is the era of systems like EURISKO. You also have computer algebra systems — Macsyma being the classic example (it survives today as Maxima). A computer algebra system is the kind of thing we’d now just consider software, but at the time it was considered AI.
This is one of the things John McCarthy famously complained about when he said something like, “as soon as it works, no one calls it AI anymore.” When systems like Macsyma were being developed, they were considered AI. And what they were, were systems where you could give it an algebra expression and it would do the cognitive labor of reducing it to its final form using a series of production rules — which is exactly what a compiler does, as I was trying to explain. A compiler starts with a statement expressed in a formal grammar and applies a series of production rules, which you can think of as heuristics. The grammar specification basically tells you: given this state of the expression, what is the next state I should transition to? You go through any number of steps until you reach a terminal — a state from which there are no more production rules to apply. That’s the final answer. When you do algebra and take a complex expression and reduce it to its simplest form through a series of steps, that’s basically what this is: applying production rules within a formal grammar until you reach a terminal state.
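A toy version of that production-rule loop — not Macsyma, just an illustration with a few invented rules and a made-up tuple representation for expressions — might look like this:

```python
# Toy production-rule simplifier: apply rewrite rules until no rule
# matches, i.e. until the expression reaches a terminal state.
# Expressions are nested tuples like ("+", x, 0); each rule either
# returns a rewritten expression or (implicitly) None.

def rule_add_zero(e):
    # x + 0 -> x
    if isinstance(e, tuple) and e[0] == "+" and e[2] == 0:
        return e[1]

def rule_mul_one(e):
    # x * 1 -> x
    if isinstance(e, tuple) and e[0] == "*" and e[2] == 1:
        return e[1]

def rule_mul_zero(e):
    # x * 0 -> 0
    if isinstance(e, tuple) and e[0] == "*" and e[2] == 0:
        return 0

RULES = [rule_add_zero, rule_mul_one, rule_mul_zero]

def simplify(e):
    # Recurse into subexpressions first, then apply rules at the root
    # until none fires -- that's the "terminal state" of the grammar.
    if isinstance(e, tuple):
        e = (e[0],) + tuple(simplify(x) for x in e[1:])
    while True:
        for rule in RULES:
            out = rule(e)
            if out is not None:
                e = simplify(out)
                break
        else:
            return e

print(simplify(("+", ("*", "x", 1), ("*", "y", 0))))  # -> x
```

The point of the sketch is just the control flow: no search, no learning, only deterministic rule application until nothing more applies.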
I’m not saying these systems were useless, especially the more practically focused ones like Maxima. But in terms of delivering autonomous, interesting thinking AI, they’re pretty lackluster. I think the closest we got, arguably, was EURISKO, and I’m kind of inclined to think that EURISKO is sort of fake. I don’t really believe most of that story.
The formal grammar paradigm has a couple of problems. I think the core problem is articulated fairly well by Allen Newell in his final lecture, given shortly before his death. It goes something like this: let’s ignore the problem of the production rules for a minute. Let’s say your production rules are perfect — you have a perfect set of problem-solving heuristics that can take you from a starting symbolic program state to a final problem solution. It doesn’t matter how brilliant your problem-solving heuristics are if you can’t even start the problem off in the right state.
To give a concrete example I use all the time: you want to go downstairs and fetch a jug of milk from the fridge. This is a task that essentially any person can do. Even people who score as mentally disabled on an IQ test can generally go down the stairs and grab a jug of milk from the fridge. It’s so basic we don’t even think of it as difficult. But then think about how you’d get a robot to do that autonomously — not programming it step by step to do one exact mechanical motion, but saying “hey, go grab me a jug of milk” and having it walk down the stairs, walk to the fridge, open the fridge, recognize the milk jug, grab it, and walk back. It’s completely intractable. It’s not just that the problem-solving heuristics can’t do it — the formal grammar approach of taking a formal symbol set and applying transformations to it cannot do this thing even in principle. There is no humanly conceivable set of problem-solving heuristics that is going to let you, starting from a raw bitmap of a room or hallway or stairs, autonomously identify the relevant features of the problem at each stage and accomplish the task. Not happening. And it’s not that it’s not happening because you’re not good enough. It’s not happening because the whole paradigm has no way to even conceive of how it would do this.
I could go into all kinds of reasons why problem-solving heuristics based on a formal grammar are intractable, but I do think Allen Newell has it exactly right. The fundamental problem is not just that the heuristics aren’t good enough — even with the production-rules part perfect, the paradigm has no way, even in principle, to get the problem into the right starting state. And that is something you would always want your AI to do, and something humans empirically can do. So you can’t write the task off as fundamentally impossible; clearly there is a way to do it. The impossibility lives in the paradigm, not in the task.
I really like the way Allen Newell phrases this when he says that the purpose of cognitive architecture as a field is to try to answer the question: how can the human mind occur in the physical universe? He threw that out as an articulation of the core question in his final lecture. I think it’s brilliant. We can now ask a different but closely related question: how can GPT occur in the physical universe? The difference is that this question is much more tractable.
So formal grammar AI didn’t work, and yet it was pursued for a very long time — arguably even as recently as the 90s, there were people genuinely still working on it. It never really died culturally or academically. I think the reason it never died academically is that it’s just aesthetically satisfying. Looking back on it, I think Dreyfus comparing it to alchemy was completely appropriate. It’s basically the Philosopher’s Stone — this very nice feel-good thing that it would be really cool if you could do. It’s an appealing myth, an attractive object in latent space that draws people towards it but from which they can’t escape. It’s an illusion. I honestly do not think formal grammar-based AI is a thing permitted by our universe to exist, at least not in the kind of way its creators envisioned it.
So what else can you do? The next paradigm is something like John Holland’s genetic algorithms. The idea there is probably quite similar to deep learning, but deep learning implements it in a way that is actually practical. The way genetic algorithms are supposed to work is that you implement a cost function — what we today call a loss function — and you apply random mutations to some discrete symbolic representation of the problem or solution. The cost function tells you whether you are getting closer to or farther from the solution, which means your problem needs to be at least differentiable-ish: there has to be a clear, objective way to score a solution, and the scoring has to be granular enough that small changes tell you whether you’re getting closer or farther.
The first big problem you run into is that random mutations and discrete programs do not mix together well. How do you make a program representation where you can do these kinds of mutations? You need mutations that have a regular structure so they don’t just destroy your programs, or you need a form of program representation that works well under the presence of random mutations. That’s just really hard to do with discrete programs. I don’t think anyone ever really cracked it.
The other problem, which is related, is the credit assignment problem. You know, one good idea is: what if we constrain our mutations to the parts of the program that are not working? If we know roughly where the error is, we can constrain our mutations to that part instead of breaking random stuff that is functioning. That’s a great idea and it will definitely narrow your search space. But how do you do that? Unless you have some way to take the cost function and calculate the gradient of change with respect to the program representation, there’s no way to find the part of the program you need to modify. So what you end up doing is random mutations, and the search space is just way too wide.
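How much credit assignment narrows the search is easy to demonstrate on a toy. Everything below is invented for illustration — a bit-string standing in for a “program,” a black-box cost function, and two search loops: blind mutation has to rediscover which part is broken at every step, while perfect credit assignment fixes each broken part exactly once.

```python
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 8  # the 64-bit "program" we want to find

def score(prog):
    # Black-box cost function: how many positions are wrong.
    return sum(a != b for a, b in zip(prog, TARGET))

def blind_search(prog):
    # Genetic-algorithm-style loop: flip a random bit, keep the
    # mutant only if the cost improves.
    steps = 0
    while score(prog) > 0:
        i = random.randrange(len(prog))
        cand = prog[:]
        cand[i] ^= 1
        if score(cand) < score(prog):
            prog = cand
        steps += 1
    return steps

def credit_assigned_search(prog):
    # With perfect credit assignment we know *which* parts are broken
    # and mutate only those -- one fix per wrong bit.
    steps = 0
    for i in range(len(prog)):
        if prog[i] != TARGET[i]:
            prog[i] ^= 1
            steps += 1
    return steps

start = [0] * 64
print(blind_search(start[:]), credit_assigned_search(start[:]))
```

With the blind loop, most mutations hit parts that already work and get thrown away; the credit-assigned loop takes exactly as many steps as there are broken parts. Gradients are what finally made the second loop computable for real programs.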
Based on the intractability of this particular approach, a lot of people concluded that AGI was just not possible. There used to be a very common story that went something like: we can’t do AGI because human intelligence is the product of a huge program search undergone by evolution, and the way evolution did it was by throwing the equivalent of zettaflop-years of compute at it — amounts we’ll just never have access to. Therefore, we’re not going to have AGI anytime this century, if ever, because you would basically have to recapitulate all of evolution to get something comparable to a human brain. And we know this because we tried the genetic-algorithm thing and it did not work. I think you can see how that prediction turned out. But it was plausible at the time.
The other thing people started doing, which was actually quite practical, was expert systems. The way an expert system works is basically that you have a knowledge base and a decision tree. Where do you get the decision tree? You take an actual human expert who knows how to do a task — say, flying an airplane — you formally represent the problem state in a way legible to the tree, and you copy what the human would do at each state. These things often didn’t generalize very well. But if you did enough hours of human instruction, put the system into enough situations with a human instructor, recorded enough data into a large enough decision tree with a large enough state space, and had even a slight compressive mechanism for generalization — that was enough to do certain tasks, or at least start to approximate them, even if the system would then catastrophically fail in an unanticipated situation.
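A minimal sketch of that shape — a lookup over formally represented states copied from an expert, with invented states and actions — including the famous failure mode:

```python
# Toy "expert system": a hand-built decision table copied from what a
# human expert would do at each formally represented problem state.
# The phases, conditions, and actions here are all invented.

TREE = {
    ("cruise", "engine_warning"): "reduce_throttle",
    ("cruise", "nominal"): "hold_course",
    ("landing", "crosswind"): "crab_approach",
    ("landing", "nominal"): "standard_approach",
}

def advise(phase, condition):
    try:
        return TREE[(phase, condition)]
    except KeyError:
        # The famous failure mode: an unanticipated state the expert
        # was never recorded handling.
        return "NO RULE -- catastrophic failure outside recorded states"

print(advise("cruise", "engine_warning"))  # -> reduce_throttle
print(advise("landing", "volcanic_ash"))   # -> falls off the tree
```

Everything inside the recorded state space works; one step outside it and the system has literally nothing to say.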
And the thing is, this reminds one a lot of deep learning. I’m not saying deep learning is literally just a giant decision tree — I think the generalization properties of deep learning are too good for that. But deep learning does in fact have bizarre catastrophic failures out of distribution and is very reliant on having training examples for a particular thing. This story sounds very familiar. The expert system was also famously inscrutable. You’d make one, and you could ask how it accomplishes a task, and the interpretability chain would look like: at this state it does this, at this state it does this, at this state it does this. And if you want to know why it does that? Good luck. This story, again, sounds very familiar.
So then you have the next paradigm — expert systems are maybe the 80s — and in the 90s and 2000s you get early statistical learning: Solomonoff-flavored ideas, boosting. Boosting is a clever method for taking weak classifiers and combining them into stronger classifiers. If you throw enough tiny little classifiers together with uncorrelated errors, you get a strong enough signal to make decisions and do classification. There are certain problems you can do fairly well with boosting.
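The uncorrelated-errors intuition is easy to simulate. Note this sketch shows only the voting intuition, not real boosting — AdaBoost, for instance, additionally reweights training examples toward the mistakes of earlier learners — and every number here is illustrative:

```python
import random

random.seed(1)

def weak_classifier(truth, accuracy=0.6):
    # A weak learner: right 60% of the time, errors independent
    # of every other learner's errors (the crucial assumption).
    return truth if random.random() < accuracy else 1 - truth

def ensemble(truth, n=101):
    # Majority vote over n weak classifiers with uncorrelated errors.
    votes = sum(weak_classifier(truth) for _ in range(n))
    return 1 if votes > n / 2 else 0

trials = 2000
weak_acc = sum(weak_classifier(1) == 1 for _ in range(trials)) / trials
strong_acc = sum(ensemble(1) == 1 for _ in range(trials)) / trials
print(f"single weak learner: {weak_acc:.2f}, majority of 101: {strong_acc:.2f}")
```

A pile of 60%-accurate voters with independent errors gets you well above 90% by majority vote; the whole game in practice is making the errors actually uncorrelated.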
And then, in 2012, you get AlexNet.
There’s a talk I really like from Alan Kay called “Software and Scaling” where he points out that if you take all the code for a modern desktop system — say, Microsoft Windows and Microsoft Office — that’s something like 400 million lines of code. If you stacked all that code as printed paper, it would be as tall as the Empire State Building. The provocative question he asks is: do you really need all that code just to specify Microsoft Word and Microsoft Windows? That seems like a lot of code for not that much functionality.
And I agree with him. Alan Kay’s theory for why it requires so much code is that it’s essentially malpractice on the part of software engineers — that software engineers work with such terrible paradigms, their abstractions are so bad, that 400 million lines is just what it takes to express it with their poor understanding. If we had a better ontology, a better kind of abstraction, we could express it much more compactly.
I agreed, and for a long time I just accepted this as the case — this was also my hypothesis. What I finally realized after looking at deep learning was that I was wrong.
Here’s the thing about something like Microsoft Office. Alan Kay will always complain that he had word processing and this and that and the other thing in some 50,000 or 100,000 lines of code — orders of magnitude less code. And here’s the thing: no, he didn’t. I’m quite certain that if you look into the details, what Alan Kay wrote was a system. The way it got its compactness was by asking the user to do certain things — you will format your document like this, when you want to do this kind of thing you will do this, you may only use this feature in these circumstances. What Alan Kay’s software expected from the user was that they would be willing to learn and master a system and derive a principled understanding of when they are and are not allowed to do things based on the rules of the system. Those rules are what allow the system to be so compact.
You can see this in TeX, for example. The original TeX typesetting system can do a great deal of what Microsoft Word can do. It’s somewhere between 15,000 and 150,000 lines of code — don’t quote me on that, but orders of magnitude less than Microsoft Word. And it can do all this stuff: professional quality typesetting, documents ready to be published as a math textbook or professional academic book, arguably better than anything else of its kind at the time. And the way TeX achieves this quality is by being a system. TeX has rules. Fussy rules. TeX demands that you, the user, learn how to format your document, how to make your document conform to what TeX needs as a system.
Here’s the thing: users hate that. Despise it. Users hate systems. The last thing users want is to learn the rules of some system and make their work conform to it.
The reason why Microsoft Word is so many lines of code and so much work is not malpractice — it would only be malpractice if your goal was to make a system. Alan Kay is right that if your goal is to make a system and you wind up with Microsoft Word, you are a terrible software engineer. But he’s simply mistaken about what the purpose of something like Microsoft Word is. The purpose is to be a virtual reality — a simulacrum of an 80s desk job. The purpose is to not learn a system. Microsoft Word tries to be as flexible as possible. You can put thoughts wherever you want, use any kind of formatting, do any kind of whatever, at any point in the program. It goes out of its way to avoid modes. If you want to insert a spreadsheet into a Word document anywhere, Microsoft Word says “yeah, just do it.”
It’s not a system. It’s a simulacrum of an 80s desk job, and because of that the code bloat is immense: what it actually has to do is try to capture all the possible behaviors, in every context, that you could theoretically perform with a piece of paper. The Microsoft Word and PDF formats are extremely bloated, incomprehensible, and basically insane. The open Microsoft Word document specification is essentially a dump of the internal structures the Word software uses to represent a document — which are, of course, insane, because Microsoft Word is not a system. The implied data structure is a mishmash of wrapped pieces of media inside wrapped pieces of media, with properties, recursive, each able to contain the others. This is not a system.
For that reason, you wind up with 400 million lines of code. And what you’ll notice about 400 million lines of code is — hey, that’s about the size of the smallest GPT models. You know, 400 million parameters. If you were maximally efficient with your representation, if you could specify it in terms of the behavior of all the rest of the program and compress a line of code down on average to about one floating point number, you wind up with about the size of a small GPT-2 type network. I don’t think that’s an accident. I think these things wind up the size that they are for very similar reasons, because they have to capture this endless library of possible behaviors that are unbounded in complexity and legion in number.
I think that’s a necessary feature of an AI system, not an incidental one. I don’t think there is a clean, compressed, crisp representation. Or at least, to the extent there is a clean crisp representation of the underlying mechanics, I think that clean crisp implementation is: gradient search over an architecture that implements a predictive objective. That’s it. Because the innards are just this giant series of ad hoc rules, pieces of lore and knowledge and facts and statistics, integrated with the program logic in a way that’s intrinsically difficult to separate out, because you are modeling arbitrary behaviors in the environment and it just takes a lot of representation space to do that.
And if the expert system — just a decision tree and a database — winds up basically uninterpretable and inscrutable, you better believe that the 400-million-line Microsoft Office binary blob is too. Or the 400-million-parameter GPT-2 model that you get if you insist on making a simulacrum of the corpus of English text. These things have this level of complexity because it’s necessary complexity, and the relative uninterpretability comes from that complexity. They are inscrutable because they are giant libraries of ad hoc behaviors to model various phenomena.
Because most of the world is actually complication. This is another thing Alan Kay talks about — the complexity curve versus the complication curve. If you have physics brain, you model the world as being mostly fundamental complexity with low Kolmogorov complexity, and you expect some kind of hyperefficient Solomonoff induction procedure to work on it. But if you have biology brain or history brain, you realize that the complication curve of the outcomes implied by the rules of the cellular automaton that is our reality is vastly, vastly bigger than the fundamental underlying complexity of the basic rules of that automaton.
Another way to put this, if you’re skeptical: the actual program size of the universe is not just the standard model. It is the standard model plus the gigantic seed state after the Big Bang. If you think of it like that, you realize the size of this program is huge. And so it’s not surprising that the model you need to model it is huge, and that this model quickly becomes very difficult to interpret due to its complexity.
This also applies when you go back to thinking about distinct regions of the brain. When we were doing cognitive science, a very common approach was to take a series of ideas for modules — you have a module for memory, a module for motor actions or procedures, one for this, one for that — and wire them together into a schematic and say, “this is how cognition works.” This is the cognitive architecture approach, which reaches its zenith in something like the ACT-R model — where you have production rules that produce tokens, by the way. And if you’re influenced by this “regions of the brain” perspective, you are thinking in terms of grammar AI. Even if you say “no, no, I didn’t want to implement grammar AI, I want to implement it as a bunch of statistical learning models that produce motor tokens” — uh huh. Yeah, exactly. And let me guess, you’re going to hook up these modules like the cognitive architecture schematic? Well, buddy.
At the time we were doing cognitive architecture, the only thing we knew about intelligence was that humans have it. If we take the brain and look at natural injuries — we’re largely not willing to deliberately cause injuries just to learn what they do, but we can take natural lesions and say: a lesion here causes this capability to be disrupted, and one here is associated with these capabilities being disrupted. Therefore, this region must cause these capabilities. That’s a fair enough inference. But your only known working example is this hugely complex thing, so consider how far that style of inference gets you on a different complex thing.
Imagine if we had GPT as a black box and didn’t know anything about it. You could have some fMRI-style heat map of activations in GPT during different things it does, and you’d say, “oh, over here is animals, over here is this, over here is that.” Then you start knocking out parts and say, “ah, this region does this thing, and that region does that thing, and therefore these must be a series of parts that go together.” You would probably be very confused. This would probably not bring you any closer to understanding the actual generating function of GPT.
I get this suspicion when I think about the brain and its regions. Are they actually, meaningfully, like a parts list? Like a series of gears that go together to make the machine move? Or is it more like a very rough set of inductive biases that then convergently reaches that shape as it learns? I have no idea. I assume there must be some kind of architecture schematic, especially because there are formative periods — and formative periods imply an architecture, kind of like the latent diffusion model where you train a VQVAE and then train a model on top of it. Training multimodal encoders on top of single-modality encoders seems like the kind of thing you would do in a brain, so I can see something like that.
But just looking at the architecture of the brain — which you can do on Google Scholar — you learn, for example, about Wernicke’s area and Broca’s area. Wernicke’s area appears to be an encoder-decoder language model. If you look at the positioning of Wernicke’s area and what other parts of the brain are around it, you realize it seems to be perfectly positioned to take projections from the single and multimodal encoders in the other parts of the brain. So presumably Wernicke’s area would be a multimodal language encoding model that takes inputs from all the other modalities, and then sends the encoded idea to Broca’s area, which translates it into motor commands. It is a quite legible architecture, at least to me.
I think if you did actually understand it, you would basically understand each individual region in about as much detail as you understand a GPT model. You’d understand its objective, and you’d understand how it feeds into other models. You wouldn’t really understand how it “works” beyond that, because the answer to that question isn’t really a “how” at all — that’s just not how these things work, and I don’t know how better to put it. I don’t think there is a master algorithm that these things learn. I don’t think there is some magic one weird trick that, if you could just pull it out of the network, would make it a thousand times more efficient. I don’t think that’s what’s going on.
The thing with latent diffusion, for example, is that it turns out to be very efficient to organize your diffusion model in the latent space of a different model and then learn to represent concepts in that pre-existing latent space. I would not be surprised if the brain uses that kind of trick all the time, and that the default is to train models in the latent space of another model. So it’s not just a CLIP — it’s a latent CLIP. You have raw inputs that get encoded, then a model that takes the encoded versions and does further processing to make a multimodal encoding, which is then passed on to some other network that eventually gets projected into Wernicke’s area, and so on.
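The “train in another model’s latent space” pattern can be sketched in miniature. Nothing below is a claim about the brain: the frozen encoder is a stand-in for a pretrained model (a VAE, a single-modality encoder), and the downstream model is just a linear readout fit by gradient descent, with all numbers invented.

```python
import random

random.seed(0)

# Stage 1: a "frozen" encoder standing in for a pretrained model.
# Its weights never change during stage-2 training.
ENC = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.4]]  # 3-dim input -> 2-dim latent

def encode(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in ENC]

# Stage 2: a downstream model trained entirely in stage 1's latent
# space -- here, a linear readout learning a fixed function of the latent.
w = [0.0, 0.0]

def target(z):
    return 2.0 * z[0] - 1.0 * z[1]  # the relationship to be learned

data = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
for epoch in range(200):
    for x in data:
        z = encode(x)                    # frozen forward pass
        pred = w[0] * z[0] + w[1] * z[1]
        err = pred - target(z)
        for i in range(2):               # gradient step on the readout only
            w[i] -= 0.1 * err * z[i]

print([round(wi, 2) for wi in w])  # readout recovers the latent-space relation
```

The stage-2 model never sees raw inputs at all; it lives entirely in the representation the frozen model provides, which is the structural point being made about chained encoders.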
Then there are the fMRI studies on which we base claims about “a region for something.” Often what’s being tested is something like a recognition test: show someone a face, see what part of the brain lights up. You test maybe three things, and you say, “oh, this part of the brain is associated with recognizing faces, therefore this is the face-recognizing region.” You have to ask yourself: is it the face-recognizing region of the brain, or is recognizing faces just one of the three things anyone happened to test? It’s not like there are that many fMRI brain studies. There’s a limited number of investigations into what is encoded where.
There’s a study out there where they show people Pokémon and find a particular region of the brain where Pokémon get encoded. And if you said, “ah yes, this is the Pokémon region, dedicated to Pokémon” — obviously there are no Pokémon in the ancestral environment, and obviously that would be imbecilic reasoning. So there’s a level of skepticism you need when reading studies that say “this is the region of the brain dedicated to this.” Is it dedicated to that, or is that just one of the things it processes?
I think the brain is quite legible if you interpret it as a series of relatively general-purpose networks that are wired together to be trained in the latent space of other networks. It’s a fairly legible architecture if you interpret it that way, in my opinion.
And so. What I’m trying to say is: there is no royal road to understanding. There’s no magic. There’s no “ah yes, if we just had a superior science of how the brain really works” — nope. This is how it really works. While you’re doing things, you have experiences, and those experiences are encoded in some kind of context window. I don’t know exactly how the brain’s context window works, but depending on how you calculate how many tokens the brain produces per second, in the cognitive-architecture sense, I personally choose to believe that the brain’s context window is somewhere between one and three days’ worth of experience. The last time I did the napkin math it came out to something like 4.75 million tokens of context — maybe it was 7 million, I don’t remember the exact number, but I remember it was more tokens than Claude will process in a context, and a single-digit number of millions. At some point you’ll hit that threshold, and then you’ll be able to hold as many experiences in short-term memory as a human can.
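For what it’s worth, the napkin math is easy to reproduce. Every number below is an assumption I picked to illustrate the shape of the calculation, not a measurement:

```python
# Back-of-the-envelope estimate of a "brain context window" in tokens.
# All three inputs are assumptions chosen for illustration only.

tokens_per_second = 40      # assumed cognitive token rate
waking_hours_per_day = 16   # assumed waking time
days_of_context = 2         # "somewhere between one and three days"

tokens = tokens_per_second * waking_hours_per_day * 3600 * days_of_context
print(f"{tokens / 1e6:.2f} million tokens")
```

With those made-up inputs you land in the single-digit millions, which is the only part of the estimate doing any work in the argument.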
Then the next thing you do: things that you don’t need right away, things that don’t need to be in context, you do compacting on. How does compacting work? Instead of just throwing out the stuff you don’t need, you kind of send it to the hippocampus to be sorted — either it gets tagged as high salience and you need to remember it, or it fades away on a fairly predictable curve, the classic forgetting curve. And that’s good enough to give you what feels like seamless recall of your day.
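A minimal sketch of that idea, with an invented functional form: a classic exponential forgetting curve whose half-life gets stretched by a salience tag. The numbers are mine, not a model of the hippocampus.

```python
def retention(t_hours, salience, half_life_hours=24.0):
    """Exponential forgetting curve modulated by a salience tag in [0, 1].

    High-salience memories ("tagged to keep") decay slowly; low-salience
    ones fade on the usual curve. The 10x stretch factor is invented.
    """
    effective_half_life = half_life_hours * (1.0 + 9.0 * salience)
    return 0.5 ** (t_hours / effective_half_life)

print(retention(48, salience=0.0))  # mundane memory: mostly gone in two days
print(retention(48, salience=1.0))  # tagged memory: barely faded
```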
But the problem is, just like with GPT, this is not quite real learning. It’s in-context learning, but it’s not getting baked into the weights. It’s not getting fully integrated into the rest of your epistemology, the rest of your knowledge. This is an approach that doesn’t really fully scale. So while you’re asleep, you take those memories that have made it from short-term memory into the hippocampus, and you migrate them into long-term memory by training the cortex with them — training the prefrontal cortex.
And when you do this, it’s slow. We can actually watch this: we happen to know that the hippocampus will send the same memory over and over and over to learn all the crap from it. What that implies is that if you had to do this in real time, it would be unacceptably slow, in the same way that GPT weight updates are unacceptably slow during inference. The way you fix it is by amortizing — you schedule the updates for later, and you do some form of active learning to decide what things to offload from the hippocampus into long-term memory. There is no trick for fast learning. The same slow updates in GPT weights are the same slow updates in human weights. The trick is just that you don’t notice them because you’re mostly updating while you’re asleep. The things you do in the meantime are stopgaps — the human brain architecture equivalent of things like RAG, like vector retrieval.
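As a toy version of that amortization story (all names and numbers are mine, not from any neuroscience source): during the “day” the agent only buffers experiences, and the expensive weight updates get replayed in batch during “sleep.”

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])   # the regularity the agent should learn
w = np.zeros(3)                       # slow weights (the "cortex")
buffer = []                           # fast episodic store (the "hippocampus")

def daytime_step():
    # Waking hours: experiences pile up in the buffer; no weight update.
    x = rng.normal(size=3)
    buffer.append((x, w_true @ x))

def sleep(epochs=100, lr=0.1):
    # Offline consolidation: replay the buffered episodes over and over,
    # amortizing the slow updates instead of paying for them in real time.
    global w
    for _ in range(epochs):
        for x, y in buffer:
            w = w - lr * (w @ x - y) * x
    buffer.clear()

for _ in range(50):
    daytime_step()
sleep()
print(w)   # close to w_true after a night's consolidation
```

The point of the sketch is only the control flow: experiences accumulate cheaply all day, and the expensive updates are batched and replayed offline.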
The hippocampus, by the way, actually does something more complicated than simple vector retrieval. It’s closer to something like: you give the hippocampus a query, it takes your memories and synthesizes them into an implied future state, and then prompts the prefrontal cortex with it in order to get the prefrontal cortex to do something like tree search to find a path that moves the agent to that outcome. This prompt also just happens to come with the relevant memories you queried for.
And if you ask what algorithm the hippocampus implements — we actually happen to know this one. The hippocampus is trained through next-token prediction, like GPT. It is trained using dopamine reward tagging, and based on the strength of the reward tagging and emotional tagging in memories, it learns to predict future reward tokens in streams of experience. Interestingly, my understanding is that the hippocampus is one of the only networks trained with next-token prediction.
The longer I think about it, the more it makes sense. When I was thinking about how you’d make a memory system with good sparse indexing, I kept concluding that realistically you need the hippocampus to perform some kind of generally intelligent behavior in order to make a really good index — it needs contextual intelligence to understand “this is the kind of thing you would recall later.” When I thought about how to do that with an AI agent, I just ended up concluding that the easiest thing would be to have GPT write tags for the memories, because you just want to apply your full general intelligence to it. Well, if that’s just the easiest way to do it, it would make total sense for the hippocampus to be trained with next-token prediction.
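A sketch of that design. `generate_tags` stands in for the LLM call that would write contextual tags; here it’s a trivial keyword stub so the example is self-contained. All names are hypothetical.

```python
from collections import defaultdict

def generate_tags(text):
    # Stand-in for "apply your full general intelligence": in a real system
    # this would be a model call that writes contextual tags for the memory.
    keywords = {"milk", "fridge", "stairs", "meeting", "deadline"}
    return {word for word in text.lower().split() if word in keywords}

class MemoryStore:
    def __init__(self):
        self.index = defaultdict(list)   # tag -> memories (the sparse index)

    def store(self, memory):
        for tag in generate_tags(memory):
            self.index[tag].append(memory)

    def recall(self, query):
        hits = []
        for tag in generate_tags(query):
            hits.extend(self.index.get(tag, []))
        return hits

store = MemoryStore()
store.store("went down the stairs to get milk from the fridge")
store.store("missed the project deadline")
print(store.recall("where did I put the milk"))
```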
Does that help you with AI alignment? Not really, not very much. But if you were to take apart the other regions of the brain, you’d find things like: here is a mono-modal audio encoder. You look at something like the posterior superior temporal sulcus, and if you read about it and look at what gets damaged when it’s lesioned, what it’s hooked up to, what other regions it projects into and what projects into it — you can really easily point at it and say, “oh, that’s a multimodal video encoder.” By the way, the video encoder in humans is one of the unique parts of the human brain. You have a very big prefrontal cortex and a seemingly unique video encoder. Other animals like rats seem to have an image encoder — something like a latent CLIP — but not a video encoder. Interesting to think about how that works.
Again, these parts are not like — look, I just don’t understand what you expect to find. Of course it’s made out of stuff. What else, how else would it work? Of course there’s a part where you have an encoder and then you train another network in the latent space of that model. Well, if that’s how you organize things — and of course that’s how you organize things, duh, that’s the most efficient way to organize a brain. The thing with latent diffusion is that it turns out to be very efficient to organize your diffusion model in the latent space of a different model. I would not be surprised if the brain uses that kind of trick all the time and that the default is to train models in the latent space of another model where possible.
So it’s not just a CLIP, it’s a latent CLIP. You have raw inputs, those get encoded, then you have a model that takes the encoded versions and does further processing to make a multimodal encoding, which is then passed on to some other network that eventually gets projected into Wernicke’s area, and so on. The things you would find if you took apart the brain into separate regions — I think it’s a quite legible architecture if you just interpret it as a series of relatively general-purpose networks wired together to be trained in the latent space of other networks.
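The “train in another model’s latent space” pattern is easy to sketch. Here the “encoder” is just a frozen random projection standing in for a pretrained network, and the downstream “model” is a linear map fit purely on latents; everything is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "encoder": a fixed random projection standing in for a pretrained
# encoder network. Its weights are never touched again.
W_enc = rng.normal(size=(32, 8)) / np.sqrt(32)
def encode(x):
    return np.tanh(x @ W_enc)

# Some downstream signal that lives in the encoder's latent space.
w_teacher = rng.normal(size=8)
X = rng.normal(size=(200, 32))
Z = encode(X)                    # all further training sees only latents
y = Z @ w_teacher

# The "second network" (here just a linear map) is trained purely on latents.
w_student = np.linalg.lstsq(Z, y, rcond=None)[0]
print(np.allclose(w_student, w_teacher, atol=1e-6))
```

This is the same economy latent diffusion exploits: the second model works in an 8-dimensional space instead of the 32-dimensional raw input.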
And the trick is that there is no trick. The way “general intelligence” works is that you are a narrow intelligence with limited out-of-distribution generalization, and this is obscured from you by the fact that while you are asleep, your brain is rearranging itself to try to meet whatever challenges it thinks you’re going to face the next day.
This is why, for example, if you’re trying to learn a really motor-heavy action video game, like a really intense first-person shooter, and you’re drilling the button sequences over and over and it’s just not clicking — and then you go to sleep, do memory consolidation, wake up, and suddenly you’re nailing it. What’s actually going on is that the motor actions that were previously too slow, too conscious, not quite clicking as in-context learning — the brain said “this needs to be a real weight update” and prioritized moving those to the front of the queue. Now they’re actually in the prefrontal cortex as motor programs that can be executed immediately and are integrated into the rest of the intuitive motor knowledge. You’re not magically generalizing out of distribution. You updated your weights. You generalized out of distribution by updating the model. I know, incredible concept. But there it is.
EDIT: Viktor Glushkov apparently did not invent genetic algorithms, but rather early precursor work to them as an approach. And people act like LLM confabulations aren’t a thing humans do. :p
Great ramble, but I feel like adopting this thesis doesn’t make me feel any better about smarter-than-human AGI alignment. Rather, I would feel awful, because in your sketched-out world you just cannot realistically reach the level of understanding you would need to feel safe ceding the trump card of being the smartest kind of thing around. Safety is not implied if you really really take the Bitter Lesson to heart. (Not implying that your above comment says otherwise: as you suggest, the ramble is not cutting at Zack’s main thesis here.)
More directly to your point, though, we do sometimes extract the clean mathematical models embedded inside of an otherwise messy naturalistic neural network. Most striking to me is the days-of-the-week group result: if you know how to look at the thing from the right angle, the clean mathematical structure apparently reveals itself. (Now admittedly, the whole rest of GPT-2 or whatever is a huge murky mess. So the stage of the science we’re groping towards at the moment is more like “we have a few clean mathematical models of individual phenomena in neural networks that really shine” than “we have anything like a clean grand unified theory.” But confusion is in the map, not in the territory, and all that, even if a particular science is extraordinarily difficult.)
But confusion is in the map, not in the territory,
Confusion can in fact be in the irreducible complexity, and therefore in the territory. “It is not possible to represent the ‘organizing principle’ of this network in fewer than 500 million parameters, which do not fit into any English statement or even any conceivably humanly readable series of English statements.” Shannon entropy can be like that sometimes.
Rather, I would feel awful, because in your sketched-out world you just cannot realistically reach the level of understanding you would need to feel safe ceding the trump card of being the smartest kind of thing around.
I think there are achievable alignment paths that don’t flow through precise mechanistic interpretability. I should write about some of them. But also, I don’t think what I’m saying precludes, as you say, having understanding of individual phenomena in the network. It’s mostly an argument against there being a far more legible way you could have done this if people had just listened to you. That is probably not true, and your ego has to let it go. You have to accept the constraints of the problem as they appear to present themselves.
Well, you don’t have to do anything, but unless you have some kind of deep fundamental insight here, your prior should be that successful alignment plans look more like relying on convergence properties than on aesthetically beautiful ‘clean room’ cognitive architecture designs. There might be some value in decomposing GPT into parts, but I would submit these parts are still going to form a system whose downstream consequences are very difficult to predict in the way I think people usually imply when they say these things. You know, they want it to be like a rocket launch, where we can know in principle what coordinate position X, Y, Z we will be in at time t. I think the kinds of properties we can guarantee will be more like “we wind up somewhere in this general region in a tractable amount of time, so long as an act of god does not derail us.”
I think there are achievable alignment paths that don’t flow through precise mechanistic interpretability. I should write about some of them.
Please do! I am very interested in this sort of thinking. Is there preexisting work you know of that runs along the lines of what you think could work?
I basically agree with the intended point that general intelligence in a compute-limited world is necessarily complicated (and think that a lot of people are way too invested in trying to simplify the brain down to the complexity of physics), but I do think you are overselling the similarities between deep learning and the brain, and in particular underselling the challenge of actually updating the model. Unlike current AIs, humans can update their weights at least once a day, every day: there is no training cutoff after which the model stops being updated, and in practice human weight updates almost certainly happen all the time, without a training and test separation. Current AIs do update their weights, but only for a couple of months during training, after which the weights are frozen and served to customers.
(For those in the know, this is basically what people mean when they talk about continual learning).
So while there are real similarities, there are also differences.
Because most of the world is actually complication. This is another thing Alan Kay talks about — the complexity curve versus the complication curve. If you have physics brain, you model the world as being mostly fundamental complexity with low Kolmogorov complexity, and you expect some kind of hyperefficient Solomonoff induction procedure to work on it. But if you have biology brain or history brain, you realize that the complication curve of the outcomes implied by the rules of the cellular automaton that is our reality is vastly, vastly bigger than the fundamental underlying complexity of the basic rules of that automaton.
Another way to put this, if you’re skeptical: the actual program size of the universe is not just the standard model. It is the standard model plus the gigantic seed state after the Big Bang. If you think of it like that, you realize the size of this program is huge. And so it’s not surprising that the model you need to model it is huge, and that this model quickly becomes very difficult to interpret due to its complexity.
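A concrete way to see rules-versus-seed: an elementary cellular automaton’s update rule fits in a single byte, but the “program” that determines what you actually observe is rule plus initial state, and the state can be as large as you like.

```python
RULE = 110                                  # the entire "law of physics": one byte
rule_bits = [(RULE >> i) & 1 for i in range(8)]

def step(cells):
    # Each cell updates from its 3-cell neighborhood (periodic boundary).
    n = len(cells)
    return [rule_bits[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

state = [0] * 63 + [1]      # the "seed": already 8x bigger than the rule
for _ in range(5):
    state = step(state)
print(state.count(1))       # the complication lives in the evolved state
```

The rule is 8 bits; everything you would need a big model to predict lives in the evolved state.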
I would slightly change this, and say that if you can’t brute-force simulate the universe from its fundamental laws, you must take the seed into account, but otherwise it’s a very good point that goes unheeded by a lot of people. (The change doesn’t matter for AI capabilities in the next 50-100 years, and it also doesn’t matter for AI alignment with p(0.9999999), but it does matter from a long-term perspective on the future/longtermism.)
Epistemic status: random thought I just had, but what if there kind of is one. I think maybe dreaming is the “test” part of the training cycle: the newly updated weights run against outcome predictions supplied by parts of the system not currently being updated. The part being updated tries to get desirable outcomes within the dream, while another network / region plays Dungeon Master, supplying the scenario and the outcomes for given actions. Testing against synthetic test data, supplied by a partially adversarial network.
I feel like, if true, we’d expect to see some kind of failures to learn-from-sleep in habitual lucid dreamers? Or reduced efficacy, anyway? I wonder what happens in a learning setup which is using test performance to make meta training decisions, if you hack the test results to erroneously report greater-than-actual performance…? Are there people who do not dream at all (as distinguished from merely not remembering dreams)?
This model of “what even is a dream, anyway?” makes a lot more predictions/retrodictions than my old model of “dreams are just the qualia of neuronal sub populations coming back online as one wakes up”.
I disagree, and think your analogy to MS Word may be where the crux lies. We could only build MS Word because it relies on a bunch of simple, repeated abstractions that keep cropping up (e.g. parsers, rope data structures, etc.) in combination with a bunch of random, complex crud that is hard to memorize. The latter is what you’re pointing at, but that doesn’t mean there aren’t a load of simple, powerful abstractions underlying the whole thing which, if you understand them, let you get the program to do pretty arbitrary things. Most of the random high-complexity stuff is only needed locally, and you can get away with just understanding the bulk structure of a chunk of the program and whatever bits of trivia you need to accomplish whatever changes you want to make to MS Word.
This is unlike the situation with LLMs, which we don’t have the ability to create by hand, or to seriously understand an arbitrary section of its functionality. Though maaaaybe we could manage to engineer something like GPT-2 right now, but I’d bet against that for GPT-3 onwards.
And the trick is that there is no trick. The way “general intelligence” works is that you are a narrow intelligence with limited out-of-distribution generalization, and this is obscured from you by the fact that while you are asleep, your brain is rearranging itself to try to meet whatever challenges it thinks you’re going to face the next day.
Would we really say that a human is a “narrow intelligence” when trying any new task until they sleep on it? I think the only thing that would meet the definition of “general intelligence” that this implies is something that generalizes to all situations, no matter how foreign. By that definition, I’m not sure if general intelligence is possible.
Wow. Thanks a lot for that. Your depiction of brain architecture in particular makes a lot of sense to me. I also feel like I finally understand-enough-to-program-one the stable diffusion tool I use daily, after following up on “latent diffusion” from your mention of it.
Still. I feel like my brain has learned an algorithm that is of value itself apart from its learning capability, that extracting meaningful portions of my algorithm is possible, and that using it as a starting point, one could make fairly straightforward upgrades to it — for example adding some kind of direct conscious control of when to add new compiled modules — upgrades which could not be used by an active learning system, because e.g. an infant would fry their own brain if given conscious write access to it.
I’m convinced: “just learning specific specialized networks wired together in a certain way” could really be all there is to understand about brains. And my confidence in “but there exists some higher ideal intelligence algorithm” has fallen somewhat, but remains above 0.5.
And it actually sounds like you’re calling out a specific possible path forward (for raw capabilities): narrow AI that can handle updating its weights where needed.
[Lightly Claude-cleaned Transcript of me talking]
It’s also important to remember how naive the early AI pioneers were. There’s the famous statement from the Dartmouth conference where they say something like, “we think if you put a handful of dedicated students on this problem, we’ll have this whole AGI thing solved in six months.” Just wildly, naively optimistic, and for quite a number of years. You can find interviews from the 60s where AI researchers believe they’re going to have what we would now basically consider AGI within a single-digit number of years. It in fact contributed to the first wave of major automation panic in the 60s — but that’s a different subject and I’d have to do a bunch of research to really do it justice.
The point is that it took time to be disabused of the notion that we were going to have AGI in a couple of years because we had the computer. Why did people ever think this in the first place? You look at all the computing power needed to do deep learning, you look at the computational requirements to run even a good compiler, and these computers back then were tiny — literally kilobytes of RAM, minuscule CPU power, minuscule memory. How could they ever think they were on the verge of AGI?
The answer is that their reasoning went: the kinds of computations the computer can be programmed to do — math problems, calculus problems — are the hardest human cognitive abilities. The things the computer does so easily are the hardest things for a human to do. Therefore, the reasoning went, if we’re already starting from a baseline of the hardest things a human can do, it should be very easy to get to the easiest things — like walking.
And this is where the naive wild over-optimism comes from. What we eventually learned was that walking is very hard. Even piloting a little insect body is very hard. Replicating the behavior of an insect — the pathfinding, the proprioceptive awareness, the environmental awareness of an insect — is quite difficult. Especially on that kind of hardware, it’s basically impossible.
Once people started to realize this, they settled into the first real AI paradigm: grammar-based AI. What people figured was that you have these compilers — the Fortran compiler, the Lisp interpreter had been invented by then, along with some elaborations. Compilers seem to be capable of doing complex cognitive work. They can unroll a loop, they can do these intricate programming tasks that previously required a dedicated person to hand-specify all the behaviors. A compiler is capable of fairly complex translation between a high-level program and the detailed behaviors the machine should do to implement that behavior efficiently — behaviors that previously would have had to be hand-specified by a programmer.
For anyone unfamiliar with compilers: the way a compiler basically works, as a vast oversimplification, is that it has a series of rules in what’s called a context-free grammar. The thing that distinguishes a context-free grammar from a natural grammar is that you are never reliant on context outside the statement itself for the meaning of the statement — or at least, any context you need, like a variable name, is formally available to the compiler. Statements in a context-free grammar have no ambiguity; there is always an unambiguous, final string you can arrive at. You never have to decide between two ambiguous interpretations based on context.
The thought process was: we have these compilers, and they seem capable of using a series of formal language steps to take high-level intentions from a person and translate them into behaviors. They even have at least the appearance of autonomy. Compilers are capable of thinking of ways to express the behavior of high-level code that the programmer might not even have thought of. There’s a sense of genuine cognitive autonomy from the programmer — you’re able to get out more than you’re putting in. I think there’s a metaphor like “some brains are like fishing: you put one idea in and you get two ideas out.” That seems like it was kind of the core intuition behind formal grammar AI: that a compiler follows individually understandable rules and yet produces behaviors that express what the programmer meant through ways the programmer would not have thought of themselves. You start to feel the machine becoming autonomous, which is very attractive.
This also lined up with the theories of thinkers like Noam Chomsky. The entire concept of the context-free grammar as distinct from natural grammar is, as I understand it, a Chomsky concept. So it’s really the Chomsky era of AI. This is the era of systems like EURISKO. You also have computer algebra systems — Maxima being the classic example. A computer algebra system is the kind of thing that now we’d just consider software, but at the time it was considered AI.
This is one of the things John McCarthy famously complained about when he said, “if it starts to work, they stop calling it AI.” When they were developing systems like Maxima, those were considered AI. And what they were, were systems where you could give it an algebra expression and it would do the cognitive labor of reducing it to its final form using a series of production rules — which is everything a compiler does, as I was trying to explain. A compiler starts with a statement expressed in a formal grammar, applies a series of production rules — which you can think of as heuristics — and the grammar specification basically tells you: given this state of the expression, what is the next state I should transition to? You go through any number of steps until you reach a terminal, a state from which there are no more production rules to apply. It’s the final answer. When you’re doing algebra and you take a complex expression and reduce it to its simplest form using a series of steps, that’s basically what this is: applying production rules within a formal grammar to reduce it to a terminal state.
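A toy version of that loop, with three invented rewrite rules: apply productions until none matches, which is exactly the terminal state described above. Real computer algebra systems are vastly more elaborate, but the control flow has this shape.

```python
import re

# Invented production rules for a tiny algebra simplifier.
RULES = [
    (r"\(([a-z0-9]+)\+0\)", r"\1"),        # (x+0) -> x
    (r"\(([a-z0-9]+)\*1\)", r"\1"),        # (x*1) -> x
    (r"\(([a-z0-9]+)\*0\)", "0"),          # (x*0) -> 0
]

def reduce_expr(expr):
    while True:
        for pattern, repl in RULES:
            new = re.sub(pattern, repl, expr, count=1)
            if new != expr:
                expr = new
                break                       # a production fired; start over
        else:
            return expr                     # no rule applies: terminal state

print(reduce_expr("((x+0)*(y*1))"))   # reduces to "(x*y)"
```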
I’m not saying these systems were useless, especially the more practically focused ones like Maxima. But in terms of delivering autonomous, interesting thinking AI, they’re pretty lackluster. I think the closest we got, arguably, was EURISKO, and I’m kind of inclined to think that EURISKO is sort of fake. I don’t really believe most of that story.
The formal grammar paradigm has a couple of problems. I think the core problem is articulated fairly well by Allen Newell in the final lecture he gives before he dies. The core problem is something like: let’s ignore the problem of the production rules for a minute. Let’s say your production rules are perfect — you have a perfect set of problem-solving heuristics that can take you from a starting symbolic program state to a final problem solution. It doesn’t matter how brilliant your problem-solving heuristics are if you can’t even start the problem off in the right state.
To give a concrete example I use all the time: you want to go downstairs and fetch a jug of milk from the fridge. This is a task that essentially any person can do. Even people who score as mentally disabled on an IQ test can generally go down the stairs and grab a jug of milk from the fridge. It’s so basic we don’t even think of it as difficult. But then think about how you’d get a robot to do that autonomously — not programming it step by step to do one exact mechanical motion, but saying “hey, go grab me a jug of milk” and having it walk down the stairs, walk to the fridge, open the fridge, recognize the milk jug, grab it, and walk back. It’s completely intractable. It’s not just that the problem-solving heuristics can’t do it — the formal grammar approach of taking a formal symbol set and applying transformations to it cannot do this thing even in principle. There is no humanly conceivable set of problem-solving heuristics that is going to let you, starting from a raw bitmap of a room or hallway or stairs, autonomously identify the relevant features of the problem at each stage and accomplish the task. Not happening. And it’s not that it’s not happening because you’re not good enough. It’s not happening because the whole paradigm has no way to even conceive of how it would do this.
I could go into all kinds of reasons why problem-solving heuristics based on a formal grammar are just going to be intractable, but I do think Allen Newell has it exactly right. The fundamental problem is not just that this thing isn’t good enough — it really cannot be good enough even in principle. Even if you get the production rules part perfect, the paradigm still has no way, even in principle, to do this extremely important thing that you would always want your AI to do and that humans empirically can do. So you can’t just write the task off as fundamentally impossible; clearly, there is a way to do it, and the paradigm simply cannot conceive of it.
I really like the way Allen Newell phrases this when he says that the purpose of cognitive architecture as a field is to try to answer the question: how can the human mind occur in the physical universe? He threw that out as an articulation of the core question in his final lecture. I think it’s brilliant. We can now ask a different but closely related question: how can GPT occur in the physical universe? The difference is that this question is much more tractable.
So formal grammar AI didn’t work, and yet it was pursued for a very long time — arguably even as recently as the 90s, there were people genuinely still working on it. It never really died culturally or academically. I think the reason it never died academically is that it’s just aesthetically satisfying. Looking back on it, I think Dreyfus comparing it to alchemy was completely appropriate. It’s basically the Philosopher’s Stone — this very nice feel-good thing that it would be really cool if you could do. It’s an appealing myth, an attractive object in latent space that draws people towards it but from which they can’t escape. It’s an illusion. I honestly do not think formal grammar-based AI is a thing permitted by our universe to exist, at least not in the kind of way its creators envisioned it.
So what else can you do? The next paradigm is something like Viktor Glushkov’s genetic algorithms. The idea there is probably quite similar to deep learning, but deep learning implements it in a way that is actually practically implementable. The way genetic algorithms are supposed to work is that you implement a cost function — what we today call a loss function — and you use random mutations on some discrete symbolic representation of the problem or solution. The cost function tells you if you are getting closer or farther from the solution, which means your problem needs to be “differentiable” at least in the loose sense that there’s a clear, objective way to score the performance of a solution, and the scoring is granular enough that small changes tell you whether you’re getting closer or farther.
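On a toy problem where a granular cost function actually exists, the scheme looks like this (target, rates, and iteration counts all invented):

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def cost(genome):
    # Granular cost function: counts how far we are from the solution,
    # so small changes register as progress or regress.
    return sum(g != t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Undirected random mutation: flip one position, with no idea
    # which part of the "program" is actually broken.
    i = random.randrange(len(genome))
    child = genome[:]
    child[i] ^= 1
    return child

genome = [0] * len(TARGET)
for _ in range(2000):
    child = mutate(genome)
    if cost(child) <= cost(genome):   # keep mutations the cost function likes
        genome = child

print(cost(genome))   # reaches 0 on this toy problem
```

Note the mutation operator flips blindly and lets the cost function veto; on a 16-bit string that’s fine, but on a real discrete program that blindness is exactly where the search space blows up.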
The first big problem you run into is that random mutations and discrete programs do not mix together well. How do you make a program representation where you can do these kinds of mutations? You need mutations that have a regular structure so they don’t just destroy your programs, or you need a form of program representation that works well under the presence of random mutations. That’s just really hard to do with discrete programs. I don’t think anyone ever really cracked it.
The other problem, which is related, is the credit assignment problem. You know, one good idea is: what if we constrain our mutations to the parts of the program that are not working? If we know roughly where the error is, we can constrain our mutations to that part instead of breaking random stuff that is functioning. That’s a great idea and it will definitely narrow your search space. But how do you do that? Unless you have some way to take the cost function and calculate the gradient of change with respect to the program representation, there’s no way to find the part of the program you need to modify. So what you end up doing is random mutations, and the search space is just way too wide.
Based on the intractability of this particular approach, a lot of people concluded that AGI was just not possible. There used to be a very common story that went something like: we can’t do AGI because human intelligence is the product of a huge program search undergone by evolution, and the way evolution did it was by throwing the equivalent of zettaflops of CPU processing power at it — amounts of compute we’ll just never have access to. Therefore, we’re not going to have AGI anytime this century, if ever, because you would basically have to recapitulate all of evolution to get something comparable to a human brain. And we know this because we tried the Glushkov thing and it did not work. I think you can see how that prediction turned out. But it was plausible at the time.
The other thing people started doing that was actually quite practical was expert systems. The way an expert system works is basically that you have a knowledge base and a decision tree. Where you get the decision tree is you take an actual human expert who knows how to do a task — say, flying an airplane — and you formally represent the problem state in a way legible to the decision tree. You just copy what a human would do at each state. These things often didn’t generalize very well, but if you did enough hours of human instruction and put the system into enough situations with a human instructor and recorded enough data and put it into a large enough decision tree with a large enough state space and had even a slight compressive mechanism for generalization — this was enough to do certain tasks, or at least start to approximate them, even if it would then catastrophically fail in an unanticipated situation.
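The skeleton of such a system can be sketched in a few lines: a table mapping formally represented problem states to recorded expert actions, plus the characteristic failure mode when the state was never anticipated. The states and actions here are made up for illustration, not from any real avionics system.

```python
# Problem state is made legible to the decision tree as discrete features,
# and each leaf records what the human expert did in that state.
EXPERT_TREE = {
    ("climb", "slow"): "lower the nose",
    ("climb", "fast"): "hold attitude",
    ("descent", "slow"): "add power",
    ("descent", "fast"): "reduce power",
}

def expert_action(phase, airspeed):
    try:
        return EXPERT_TREE[(phase, airspeed)]
    except KeyError:
        # Unanticipated state: the classic catastrophic failure mode.
        return "NO RULE -- hand control back to the human"
```

Scale the table up by orders of magnitude, add a slight compressive mechanism over the keys, and you have the basic shape of the paradigm, including its brittleness.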
And the thing is, this reminds one a lot of deep learning. I’m not saying deep learning is literally just a giant decision tree — I think the generalization properties of deep learning are too good for that. But deep learning does in fact have bizarre catastrophic failures out of distribution and is very reliant on having training examples for a particular thing. This story sounds very familiar. The expert system was also famously inscrutable. You’d make one, and you could ask how it accomplishes a task, and the interpretability chain would look like: at this state it does this, at this state it does this, at this state it does this. And if you want to know why it does that? Good luck. This story, again, sounds very familiar.
So then you have the next paradigm — expert systems are maybe the 90s — and then in the 2000s you get early statistical learning: Solomonoff-type things, boosting. Boosted trees are a clever method for taking weak classifiers and combining them into stronger ones. If you throw enough tiny little classifiers together with uncorrelated errors, you get a strong enough signal to make decisions and do classification. There are certain problems you can do fairly well with boosting.
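The uncorrelated-errors intuition can be shown in a toy sketch. This is not boosting proper (there is no reweighting of examples, as in AdaBoost); it is just a majority vote over many weak learners, which is the core of why combining them works. The 60% accuracy figure and the parity task are arbitrary assumptions.

```python
import random

def weak_classifier(x, seed):
    """A weak learner: correct only 60% of the time. Seeding by (x, seed)
    keeps different learners' errors roughly uncorrelated."""
    rng = random.Random(x * 7919 + seed)
    truth = x % 2                          # ground truth: parity of x
    return truth if rng.random() < 0.6 else 1 - truth

def ensemble(x, n_learners=101):
    """Majority vote over many weak learners."""
    votes = sum(weak_classifier(x, s) for s in range(n_learners))
    return 1 if votes * 2 > n_learners else 0

inputs = range(200)
weak_acc = sum(weak_classifier(x, 0) == x % 2 for x in inputs) / 200
strong_acc = sum(ensemble(x) == x % 2 for x in inputs) / 200
```

One hundred and one classifiers that are barely better than a coin flip, voting together, behave like a single strong classifier, provided their errors really are uncorrelated.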
And then there’s 2012, you get AlexNet.
There’s a talk I really like from Alan Kay called “Programming and Scaling” where he points out that if you take all the code for a modern desktop system — say, Microsoft Windows and Microsoft Office — that’s something like 400 million lines of code. If you stacked all that code as printed paper, it would be as tall as the Empire State Building. The provocative question he asks is: do you really need all that code just to specify Microsoft Word and Microsoft Windows? That seems like a lot of code for not that much functionality.
And I agree with him. Alan Kay’s theory for why it requires so much code is that it’s essentially malpractice on the part of software engineers — that software engineers work with such terrible paradigms, their abstractions are so bad, that 400 million lines is just what it takes to express it with their poor understanding. If we had a better ontology, a better kind of abstraction, we could express it much more compactly.
I agreed, and for a long time I just accepted this as the case — this was also my hypothesis. What I finally realized after looking at deep learning was that I was wrong.
Here’s the thing about something like Microsoft Office. Alan Kay will always complain that he had word processing and this and that and the other thing in some 50,000 or 100,000 lines of code — orders of magnitude less code. And here’s the thing: no, he didn’t. I’m quite certain that if you look into the details, what Alan Kay wrote was a system. The way it got its compactness was by asking the user to do certain things — you will format your document like this, when you want to do this kind of thing you will do this, you may only use this feature in these circumstances. What Alan Kay’s software expected from the user was that they would be willing to learn and master a system and derive a principled understanding of when they are and are not allowed to do things based on the rules of the system. Those rules are what allow the system to be so compact.
You can see this in TeX, for example. The original TeX typesetting system can do a great deal of what Microsoft Word can do. It’s somewhere between 15,000 and 150,000 lines of code — don’t quote me on that, but orders of magnitude less than Microsoft Word. And it can do all this stuff: professional quality typesetting, documents ready to be published as a math textbook or professional academic book, arguably better than anything else of its kind at the time. And the way TeX achieves this quality is by being a system. TeX has rules. Fussy rules. TeX demands that you, the user, learn how to format your document, how to make your document conform to what TeX needs as a system.
Here’s the thing: users hate that. Despise it. Users hate systems. The last thing users want is to learn the rules of some system and make their work conform to it.
The reason why Microsoft Word is so many lines of code and so much work is not malpractice — it would only be malpractice if your goal was to make a system. Alan Kay is right that if your goal is to make a system and you wind up with Microsoft Word, you are a terrible software engineer. But he’s simply mistaken about what the purpose of something like Microsoft Word is. The purpose is to be a virtual reality — a simulacrum of an 80s desk job. The purpose is to not learn a system. Microsoft Word tries to be as flexible as possible. You can put thoughts wherever you want, use any kind of formatting, do any kind of whatever, at any point in the program. It goes out of its way to avoid modes. If you want to insert a spreadsheet into a Word document anywhere, Microsoft Word says “yeah, just do it.”
It’s not a system. It’s a simulacrum of an 80s desk job, and because of that the code bloat is immense, because what it actually has to do is try to capture all the possible behaviors in every context that you could theoretically do with a piece of paper. Microsoft Word and PDF formats are extremely bloated, incomprehensible, and basically insane. The open Microsoft Word document specification is basically just a dump of the internal structures the Microsoft Word software uses to represent a document, which are of course insane — because Microsoft Word is not a system. The implied data structure is schizophrenic: it’s a mishmash of wrapped pieces of media inside wrapped pieces of media, with properties, and they’re recursive, and they can contain other ones. This is not a system.
For that reason, you wind up with 400 million lines of code. And what you’ll notice about 400 million lines of code is — hey, that’s about the size of the smallest GPT models. You know, 400 million parameters. If you were maximally efficient with your representation, if you could specify it in terms of the behavior of all the rest of the program and compress a line of code down on average to about one floating point number, you wind up with about the size of a small GPT-2 type network. I don’t think that’s an accident. I think these things wind up the size that they are for very similar reasons, because they have to capture this endless library of possible behaviors that are unbounded in complexity and legion in number.
I think that’s a necessary feature of an AI system, not an incidental one. I don’t think there is a clean, compressed, crisp representation. Or at least, to the extent there is a clean crisp representation of the underlying mechanics, I think that clean crisp implementation is: gradient search over an architecture that implements a predictive objective. That’s it. Because the innards are just this giant series of ad hoc rules, pieces of lore and knowledge and facts and statistics, integrated with the program logic in a way that’s intrinsically difficult to separate out, because you are modeling arbitrary behaviors in the environment and it just takes a lot of representation space to do that.
And if the expert system — just a decision tree and a database — winds up basically uninterpretable and inscrutable, you better believe that the 400-million-line Microsoft Office binary blob is too. Or the 400-million-parameter GPT-2 model that you get if you insist on making a simulacrum of the corpus of English text. These things have this level of complexity because it’s necessary complexity, and the relative uninterpretability comes from that complexity. They are inscrutable because they are giant libraries of ad hoc behaviors to model various phenomena.
Because most of the world is actually complication. This is another thing Alan Kay talks about — the complexity curve versus the complication curve. If you have physics brain, you model the world as being mostly fundamental complexity with low Kolmogorov complexity, and you expect some kind of hyperefficient Solomonoff induction procedure to work on it. But if you have biology brain or history brain, you realize that the complication curve of the outcomes implied by the rules of the cellular automaton that is our reality is vastly, vastly bigger than the fundamental underlying complexity of the basic rules of that automaton.
Another way to put this, if you’re skeptical: the actual program size of the universe is not just the standard model. It is the standard model plus the gigantic seed state after the Big Bang. If you think of it like that, you realize the size of this program is huge. And so it’s not surprising that the model you need to model it is huge, and that this model quickly becomes very difficult to interpret due to its complexity.
This also applies when you go back to thinking about distinct regions of the brain. When we were doing cognitive science, a very common approach was to take a series of ideas for modules — you have a module for memory, a module for motor actions or procedures, one for this, one for that — and wire them together into a schematic and say, “this is how cognition works.” This is the cognitive architecture approach, which reaches its zenith in something like the ACT-R model — where you have production rules that produce tokens, by the way. And if you’re influenced by this “regions of the brain” perspective, you are thinking in terms of grammar AI. Even if you say “no, no, I didn’t want to implement grammar AI, I want to implement it as a bunch of statistical learning models that produce motor tokens” — uh huh. Yeah, exactly. And let me guess, you’re going to hook up these modules like the cognitive architecture schematic? Well, buddy.
At the time we were doing cognitive architecture, the only thing we knew about intelligence was that humans have it. If we take the brain and look at natural injuries — we’re largely not willing to deliberately cause injuries just to learn what they do, but we can take natural lesions and say: a lesion here causes this capability to be disrupted, and one here is associated with these capabilities being disrupted. Therefore, this region must cause these capabilities. That’s a fair enough inference. But because your only known working example is this hugely complex thing —
Imagine if we had GPT as a black box and didn’t know anything about it. You could have some fMRI-style heat map of activations in GPT during different things it does, and you’d say, “oh, over here is animals, over here is this, over here is that.” Then you start knocking out parts and say, “ah, this region does this thing, and that region does that thing, and therefore these must be a series of parts that go together.” You would probably be very confused. This would probably not bring you any closer to understanding the actual generating function of GPT.
I get this suspicion when I think about the brain and its regions. Are they actually, meaningfully, like a parts list? Like a series of gears that go together to make the machine move? Or is it more like a very rough set of inductive biases that then convergently reaches that shape as it learns? I have no idea. I assume there must be some kind of architecture schematic, especially because there are formative periods — and formative periods imply an architecture, kind of like the latent diffusion model where you train a VQVAE and then train a model on top of it. Training multimodal encoders on top of single-modality encoders seems like the kind of thing you would do in a brain, so I can see something like that.
But just looking at the architecture of the brain — which you can do on Google Scholar — you learn, for example, about Wernicke’s area and Broca’s area. Wernicke’s area appears to be an encoder-decoder language model. If you look at the positioning of Wernicke’s area and what other parts of the brain are around it, you realize it seems to be perfectly positioned to take projections from the single and multimodal encoders in the other parts of the brain. So presumably Wernicke’s area would be a multimodal language encoding model that takes inputs from all the other modalities, and then sends the encoded idea to Broca’s area, which translates it into motor commands. It is a quite legible architecture, at least to me.
I think if you did actually understand it, you would basically understand each individual region in about as much detail as you understand a GPT model. You’d understand its objective, you’d understand how it feeds into other models. You wouldn’t really understand how it “works” beyond that, because the answer to that question is: that’s just not how these things work. I don’t know how to explain it to you. I don’t think there is a master algorithm that these things learn. I don’t think there was some magic one weird trick that, if you could just pull it out of the network, would make it a thousand times more efficient. I don’t think that’s what’s going on.
The thing with latent diffusion, for example, is that it turns out to be very efficient to organize your diffusion model in the latent space of a different model and then learn to represent concepts in that pre-existing latent space. I would not be surprised if the brain uses that kind of trick all the time, and that the default is to train models in the latent space of another model. So it’s not just a CLIP — it’s a latent CLIP. You have raw inputs that get encoded, then a model that takes the encoded versions and does further processing to make a multimodal encoding, which is then passed on to some other network that eventually gets projected into Wernicke’s area, and so on.
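The organizing trick, training one model in the frozen latent space of another, can be shown in miniature. In this sketch the "encoder" is a fixed linear map whose weights never change, and the model trained on top of it is just a linear head fit by SGD, not a diffusion model; the whole thing is an illustrative assumption-laden toy, not how any real brain region or diffusion system is implemented.

```python
import random

random.seed(0)

# Stage 1: a frozen "encoder" -- a fixed linear map from 4 raw input
# dimensions down to a 2-dimensional latent space. Its weights never change.
ENCODER = [[1.0, 0.0, 1.0, 0.0],
           [0.0, 1.0, 0.0, -1.0]]

def encode(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in ENCODER]

# Stage 2: train a small model *in the latent space* of the frozen encoder,
# never touching the encoder itself.
TRUE_W = [2.0, -1.0]              # the target function, defined in latent space

w = [0.0, 0.0]
lr = 0.02
for _ in range(5000):
    x = [random.gauss(0, 1) for _ in range(4)]
    z = encode(x)
    y = TRUE_W[0] * z[0] + TRUE_W[1] * z[1]
    err = (w[0] * z[0] + w[1] * z[1]) - y
    w[0] -= lr * err * z[0]       # plain SGD on the latent-space model only
    w[1] -= lr * err * z[1]
```

The downstream model never sees raw inputs at all; it learns entirely in the representation the upstream model already built, which is the design pattern being pointed at.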
The things that you would find if you took apart the brain and separated it into regions — if you look at fMRI studies on which we base claims about “a region for something” — often what’s being tested is something like a recognition test: if you show someone a face, what part of the brain lights up? And you test on maybe three things, and you say “oh, this part of the brain is associated with recognizing faces doing this and that, therefore this is the face-recognizing region.” You have to ask yourself: is it the face-recognizing region of the brain, or is recognizing faces just one of the three things anyone happened to test? It’s not like there are that many fMRI brain studies. There’s a limited number of investigations into what part of what is encoded where.
There’s a study out there where they show people Pokémon and find a particular region of the brain where Pokémon get encoded. And if you said, “ah yes, this is the Pokémon region, dedicated to Pokémon” — obviously there are no Pokémon in the ancestral environment, and obviously that would be imbecile reasoning. So there’s a level of skepticism you need when reading studies that say “this is the region of the brain dedicated to this.” Is it dedicated to that, or is that just one of the things it processes?
I think the brain is quite legible if you interpret it as a series of relatively general-purpose networks that are wired together to be trained in the latent space of other networks. It’s a fairly legible architecture if you interpret it that way, in my opinion.
And so. What I’m trying to say is: there is no royal road to understanding. There’s no magic. There’s no “ah yes, if we just had a superior science of how the brain really works” — nope. This is how it really works. The way it really, really works is: while you’re doing things, you have experiences, and these experiences are encoded in some kind of context window. I don’t know exactly how the brain’s context window works, but depending on how you want to calculate how many tokens the brain produces per second in the cognitive architecture sense, I personally choose to believe that the brain’s context window is somewhere between one and three days’ worth of experience. The last time I did the napkin math, it was something like 4.75 million tokens of context — maybe it was 7 million, I don’t remember the exact number, but I remember it was more tokens than Claude will process in a context, but a single-digit number of millions. At some point you’ll hit that threshold, and then you’ll be able to hold as many experiences in short-term memory as a human can.
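For what it's worth, the napkin math reproducing the 4.75 million figure looks like this; both numbers are assumptions chosen to match the estimate, not measurements.

```python
# All numbers here are assumptions for napkin math, not measurements.
tokens_per_second = 55               # assumed cognitive "token" rate
seconds_per_day = 24 * 3600
tokens_per_day = tokens_per_second * seconds_per_day   # one day of experience
```

At that assumed rate, a single day lands at about 4.75 million tokens, so "one to three days" puts the window in the single-digit millions either way.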
Then the next thing you do: things that you don’t need right away, things that don’t need to be in context, you do compacting on. How does compacting work? Instead of just throwing out the stuff you don’t need, you kind of send it to the hippocampus to be sorted — either it gets tagged as high salience and you need to remember it, or it fades away on a fairly predictable curve, the classic forgetting curve. And that’s good enough to give you what feels like seamless recall of your day.
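The two pieces of that story, an exponential forgetting curve plus a salience sort, fit in a few lines. The retention formula is the classic Ebbinghaus-style shape; the threshold, stability constant, and salience scores are invented for illustration.

```python
import math

def retention(t_hours, stability=24.0):
    """Ebbinghaus-style forgetting curve: R = exp(-t / S)."""
    return math.exp(-t_hours / stability)

def consolidate(memories, threshold=0.5):
    """Toy hippocampal sort: high-salience memories are tagged for keeping;
    the rest are left to decay along the forgetting curve."""
    kept, fading = [], []
    for name, salience in memories:
        (kept if salience >= threshold else fading).append(name)
    return kept, fading
```

So a low-salience memory isn't deleted, it just slides down the curve, which is enough to feel like seamless recall over the span of a day.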
But the problem is, just like with GPT, this is not quite real learning. It’s in-context learning, but it’s not getting baked into the weights. It’s not getting fully integrated into the rest of your epistemology, the rest of your knowledge. This is an approach that doesn’t really fully scale. So while you’re asleep, you take those memories that have made it from short-term memory into the hippocampus, and you migrate them into long-term memory by training the cortex with them — training the prefrontal cortex.
And when you do this, it’s slow. We can actually watch this: we happen to know that the hippocampus will send the same memory over and over and over to learn all the crap from it. What that implies is that if you had to do this in real time, it would be unacceptably slow, in the same way that GPT weight updates are unacceptably slow during inference. The way you fix it is by amortizing — you schedule the updates for later, and you do some form of active learning to decide what things to offload from the hippocampus into long-term memory. There is no trick for fast learning. The same slow updates in GPT weights are the same slow updates in human weights. The trick is just that you don’t notice them because you’re mostly updating while you’re asleep. The things you do in the meantime are stopgaps — the human brain architecture equivalent of things like RAG, like vector retrieval.
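The amortization scheme itself is easy to sketch: episodes accumulate in a buffer during the "day," and the slow weight updates happen offline by replaying each stored episode many times. This is an experience-replay toy with a one-weight "cortex," purely illustrative.

```python
import random

random.seed(1)

# Episodes pile up in a replay buffer (the hippocampus) during the day.
# The underlying regularity to be learned is y = 3x.
buffer = [(0.1 * i, 3.0 * (0.1 * i)) for i in range(20)]

def sleep(weight, buffer, replays=50, lr=0.05):
    """Amortized consolidation: replay every stored episode many times,
    the way the hippocampus re-sends the same memory to the cortex."""
    for _ in range(replays):
        for x, y in random.sample(buffer, len(buffer)):
            weight -= lr * (weight * x - y) * x   # slow SGD, one episode at a time
    return weight

weight = sleep(0.0, buffer)
```

Each individual update is tiny and slow, exactly the point, but because a thousand of them run while nothing else needs the system, you wake up with the regularity baked into the weight.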
The hippocampus, by the way, actually does something more complicated than simple vector retrieval. It’s closer to something like: you give the hippocampus a query, it takes your memories and synthesizes them into an implied future state, and then prompts the prefrontal cortex with it in order to get the prefrontal cortex to do something like tree search to find a path that moves the agent to that outcome. This prompt also just happens to come with the relevant memories you queried for.
And if you ask what algorithm the hippocampus implements — we actually happen to know this one. The hippocampus is trained through next-token prediction, like GPT. It is trained using dopamine reward tagging, and based on the strength of the reward tagging and emotional tagging in memories, it learns to predict future reward tokens in streams of experience. Interestingly, my understanding is that the hippocampus is one of the only networks trained with next-token prediction.
The longer I think about it, the more it makes sense. When I was thinking about how you’d make a memory system with good sparse indexing, I kept concluding that realistically you need the hippocampus to perform some kind of generally intelligent behavior in order to make a really good index — it needs contextual intelligence to understand “this is the kind of thing you would recall later.” When I thought about how to do that with an AI agent, I just ended up concluding that the easiest thing would be to have GPT write tags for the memories, because you just want to apply your full general intelligence to it. Well, if that’s just the easiest way to do it, it would make total sense for the hippocampus to be trained with next-token prediction.
Does that help you with AI alignment? Not really, not very much. But if you were to take apart the other regions of the brain, it’s like: mono-modal audio encoder. You look at something like the posterior superior temporal sulcus, and if you read about it and look at what gets damaged when it’s lesioned, what it’s hooked up to, what other regions it projects into and what projects into it — you can really easily point at these and say, “oh, that’s a multimodal video encoder.” By the way, the video encoder in humans is one of the unique parts of the human brain. You have a very big prefrontal cortex and a seemingly unique video encoder. Other animals like rats seem to have an image encoder — something like a latent CLIP — but not a video encoder. Interesting to think about how that works.
Again, these parts are not like — look, I just don’t understand what you expect to find. Of course it’s made out of stuff. What else, how else would it work? Of course there’s a part where you have an encoder and then you train another network in the latent space of that model. Well, if that’s how you organize things — and of course that’s how you organize things, duh, that’s the most efficient way to organize a brain. As I said above about latent diffusion: wherever possible, the default is to train models in the latent space of another model. And that is what you find if you take apart the brain into separate regions: a quite legible architecture, if you just interpret it that way.
And the trick is that there is no trick. The way “general intelligence” works is that you are a narrow intelligence with limited out-of-distribution generalization, and this is obscured from you by the fact that while you are asleep, your brain is rearranging itself to try to meet whatever challenges it thinks you’re going to face the next day.
This is why, for example, if you’re trying to learn a really motor-heavy action video game, like a really intense first-person shooter, and you’re drilling the button sequences over and over and it’s just not clicking — and then you go to sleep, do memory consolidation, wake up, and suddenly you’re nailing it. What’s actually going on is that the motor actions that were previously too slow, too conscious, not quite clicking as in-context learning — the brain said “this needs to be a real weight update” and prioritized moving those to the front of the queue. Now they’re actually in the prefrontal cortex as motor programs that can be executed immediately and are integrated into the rest of the intuitive motor knowledge. You’re not magically generalizing out of distribution. You updated your weights. You generalized out of distribution by updating the model. I know, incredible concept. But there it is.
EDIT: Viktor Glushkov apparently did not invent genetic algorithms, but did early precursor work on that approach. And people act like LLM confabulations aren’t a thing humans do. :p
I really liked this comment-essay! I learned a lot from it, and think it could be turned into a top-level post in its own right.
Great ramble, but I feel like adopting this thesis doesn’t make me feel any better about smarter-than-human AGI alignment. Rather, I would feel awful, because in your sketched-out world you just cannot realistically reach the level of understanding you would need to feel safe ceding the trump card of being the smartest kind of thing around. Safety is not implied if you really really take the Bitter Lesson to heart. (Not implying that your above comment says otherwise: as you suggest, the ramble is not cutting at Zack’s main thesis here.)
More directly to your point, though, we do sometimes extract the clean mathematical models embedded inside of an otherwise messy naturalistic neural network. Most striking to me is the days-of-the-week group result: if you know how to look at the thing from the right angle, the clean mathematical structure apparently reveals itself. (Now admittedly, the whole rest of GPT-2 or whatever is a huge murky mess. So the stage of the science we’re groping towards at the moment is more like “we have a few clean mathematical models of individual phenomena in neural networks that really shine” than “we have anything like a clean grand unified theory.” But confusion is in the map, not in the territory, and all that, even if a particular science is extraordinarily difficult.)
Confusion can in fact be in the irreducible complexity, and therefore in the territory: “It is not possible to represent the ‘organizing principle’ of this network in fewer than 500 million parameters, which do not fit into any English statement or even any conceivably humanly readable series of English statements.” Shannon entropy can be like that sometimes.
I think there are achievable alignment paths that don’t flow through precise mechanistic interpretability. I should write about some of them. But I also don’t think what I’m saying precludes, as you say, understanding individual phenomena in the network. It’s mostly an argument against there having been some far more legible way to do all this if only people had listened to you; that is probably not true, and your ego has to let it go. You have to accept the constraints of the problem as they appear to present themselves.
Well, you don’t have to do anything, but unless you have some kind of deep fundamental insight here, your prior should be that successful alignment plans look more like relying on convergence properties than on aesthetically beautiful ‘clean room’ cognitive architecture designs. There might be some value in decomposing GPT into parts, but I would submit these parts are still going to form a system whose downstream consequences are very difficult to predict in the way I think people usually imply when they say these things. You know, they want it to be like a rocket launch where we can know in principle what coordinate position X, Y, Z we will be in at time t. I think the kinds of properties we can guarantee will be more like “we wind up somewhere in this general region in a tractable amount of time so long as an act of god does not derail us.”
Please do! I am very interested in this sort of thinking. Is there preexisting work you know of that runs along the lines of what you think could work?
What sources have you used to derive your understanding of brain function from?
I basically agree with the intended point that general intelligence in a compute-limited world is necessarily complicated (and think that a lot of people are way too invested in trying to simplify the brain down to the complexity of physics), but I do think you are overselling the similarities between deep learning and the brain, and in particular underselling the challenge of actually updating the model. Unlike current AIs, humans can update their weights at least once a day, every day: there is no training cutoff after which the model stops updating, and in practice human weight updates almost certainly happen all the time, without a training/test separation. Current AIs do update their weights, but only for the couple of months of training, after which the weights are frozen and served to customers.
(For those in the know, this is basically what people mean when they talk about continual learning).
So while there are real similarities, there are also differences.
I would slightly change this, and say that if you can’t brute-force simulate the universe based on its fundamental laws, you must take into account the seed; but otherwise it’s a very good point that goes unheeded by a lot of people. (The change doesn’t matter for AI capabilities in the next 50-100 years, and it also doesn’t matter for AI alignment with p(0.9999999), but it does matter from a long-term perspective on the future/longtermism.)
Re: no human training/test separation:
Epistemic status: random thought I just had, but what if there kind of is. I think maybe dreaming is the “test” part of the training cycle: the newly updated weights run against outcome predictions supplied by parts of the system not currently being updated. The being-updated part tries to get desirable outcomes within the dream, and another network / region plays Dungeon Master, supplying scenario and outcomes for given actions. Test against synthetic test data, supplied by a partially adversarial network.
I feel like, if true, we’d expect to see some kind of failures to learn-from-sleep in habitual lucid dreamers? Or reduced efficacy, anyway? I wonder what happens in a learning setup which is using test performance to make meta training decisions, if you hack the test results to erroneously report greater-than-actual performance…? Are there people who do not dream at all (as distinguished from merely not remembering dreams)?
This model of “what even is a dream, anyway?” makes a lot more predictions/retrodictions than my old model of “dreams are just the qualia of neuronal sub populations coming back online as one wakes up”.
What is a CLIP?
I disagree, and think your analogy to MS Word may be where the crux lies. We could only build MS Word because it relies on a bunch of simple, repeated abstractions that keep cropping up (e.g. parsers, rope data structures, etc.) in combination with a bunch of random, complex crud that is hard to memorize. The latter is what you’re pointing at, but that doesn’t mean there aren’t a load of simple, powerful abstractions underlying the whole thing which, if you understand them, let you get the program to do pretty arbitrary things. Most of the random high-complexity stuff is only needed locally, and you can get away with just understanding the bulk structure of a chunk of the program plus whatever bits of trivia you need to accomplish whatever changes you want to make to MS Word.
This is unlike the situation with LLMs, which we don’t have the ability to create by hand, or to seriously understand an arbitrary section of. Though maaaybe we could manage to engineer something like GPT-2 right now, but I’d bet against that for GPT-3 onwards.
Would we really say that a human is a “narrow intelligence” when trying any new task until they sleep on it? I think the only thing that would meet the definition of “general intelligence” this implies is something that generalizes to all situations, no matter how foreign. By that definition, I’m not sure general intelligence is possible.
Wow. Thanks a lot for that. Your depiction of brain architecture in particular makes a lot of sense to me. I also feel like I finally understand-enough-to-program-one the stable diffusion tool I use daily, after following up on “latent diffusion” from your mention of it.
Still. I feel like my brain has learned an algorithm that is of value itself apart from its learning capability, that extracting meaningful portions of my algorithm is possible, and that using it as a starting point, one could make fairly straightforward upgrades to it — for example adding some kind of direct conscious control of when to add new compiled modules — upgrades which could not be used by an active learning system, because e.g. an infant would fry their own brain if given conscious write access to it.
I’m convinced: “just learning specific specialized networks wired together in a certain way” could really be all there is to understand about brains. And my confidence in “but there exists some higher ideal intelligence algorithm” has fallen somewhat, but remains above 0.5.
And it actually sounds like you’re calling out a specific possible path forward (for raw capabilities): narrow AI that can handle updating its weights where needed.