GPT-4 has already been trained on lots of human language. Let’s talk instead about a transformer initialized with random weights (Xavier initialization or whatever).
Starting right from that random Xavier initialization, you are not allowed to (pre)train it on any human language at all. None. No text. No audio of humans speaking. No video of humans speaking. Absolutely none at all. Do you think that could wind up with grammatical language? If not, then I claim this is a nice demonstration (one of many) that human child brains are doing something different from the kind of AI you have in mind.
The LM does indeed start training from a random initialization and has to learn new languages. So the question becomes: why are humans more sample-efficient than LMs? I am not sure about this, and I am not even sure of the premise. It sometimes feels like GPT-4 can read something once that I would need to read a few times. Which is to say that sample efficiency may be a function of how many tokens you have already seen (I would greatly appreciate a graph showing this). So it could be the case that humans are just a particular kind of pre-trained. But pre-training normally includes language, and babies don’t seem to be pre-seeded with the languages of their ancestors. So what can that pre-training contain? Probably interaction with some sufficiently complex yet predictable environment that responds to their action space (tokens). Maybe you could do meta-learning from this stage to create an LM which can learn a language from few samples. But even the smaller model may be difficult to encode directly in the genome, and it could be easier to specify parts of those models as a reward function which, when followed, leads to reconstructing those pre-trained models.
But your point here is that ML models are not like people in this way. Some other differences that I tentatively think currently exist: LMs are faster than people, people are more sample-efficient than LMs, and LMs tend to get stuck when making long-term plans at the moment (try AutoGPT, for instance).
I believe you are pointing out that there are differences between people and LMs to demonstrate that the space of competent intelligences is wide. The (admittedly rephrased) point I made in response to this earlier was that while there are many intelligences beyond some level of competence, I expect competitive pressures to ramp up as a function of intelligence (related). This is because I think that a system’s optimization ability (aka intelligence) is a monotonic function of its ability to communicate internally and externally (flagging that I am quantifying communication via Bayesian information). Optimization ability scales with communication because communication allows you to recruit more computational resources for a given problem. Going back to the main point, I think that the design space of competitive intelligences will end up converging, and the only reason it hasn’t sufficiently converged yet is that we are not smart enough.
Your OP doesn’t say “auto-regressive training & prompting”, rather it says “an AI is somehow trained in a way that is analogous to a human childhood in all of the relevant ways”. I don’t think the kinds of AIs and training procedures that you have in mind are at all analogous to a human childhood. Children will do things that they want to do without being “prompted” by anyone. Children are not exposed to 45 TB of internet text while in the womb. Etc. Right??
I did not go into detail about what I believed were the ‘relevant ways’ because I thought that talking about communication and such would be too philosophical and drag out the post. But I do understand that it might make the reader suspicious that I am circularly defining the ‘relevant ways’ in terms of humans. Of course, I need to use my baseline of humans in order to guess what future values might look like, in which case this is the same kind of circularity as in any scientific theory that uses data from the universe to predict other data from the universe.
Is that what you’ve been thinking of this whole time? You didn’t even mention decision transformers until just now. (Or did I miss it?)
My proposal (linked again for convenience) and toolformer (in an earlier comment) also train auto-regressively on a modified prompt. I was including this when talking about auto-regressive training + prompting. This is what I was trying to communicate by saying “Also for the record I am talking about reshaping the prompt during and not just after regular auto-regressive training”.
Let me put it this way. Suppose I understood how human brains worked sufficiently well that I could make an AI that was doing all the same things as a human child brain, for the same reasons, i.e. due to the same underlying algorithms. Then I put this AI in a human body and raise it in a loving human family.
From my perspective, this would be the most central example possible of “an AI is somehow trained in a way that is analogous to a human childhood in all of the relevant ways”.
But from your perspective, I feel like you’re going to say “Oh no no no, that’s totally different from the thing I’m talking about in this post.”
Yes, that would be a central example, and I would wish you the best of luck getting it done in time.
(After all, human brains incorporate many features that do not increase the communication of the system that they are embedded in. Sociopathy has not been selected out of humans. Some human children are introverted and we’re OK with that. Etc. etc.)
To say this you would have to argue that humans without this feature would have led to a faster singularity, more or less. My point earlier with respect to sociopathy was that it is only selected out to the degree that it manifests in anti-social behavior. If your sociopath ends up producing some company that produces net value for organisms at various levels of abstraction, evolution counts that as a win. That introvert might invent the steam engine, letting people interact from farther away and extract more energy from their environment so you can make more people who start the cycle over again. Not that inventing the steam engine is likely enough for evolution to pick it up specifically; I am just trying to say that the action space is much wider than the words that you verbalize.
If so, do you see why the post title & intro come across as misleading?
The antecedent has not been fulfilled if I am understanding what “if so” is pointing at correctly.
I agree that my statements about sample efficiency do not address this point. I do think you could get transformers to invent language without seeing language data. You would want to use online learning in an observation, state, action loop while interacting with an environment, and probably include optimizations from ReAct, Reflexion, AutoGPT, and Voyager. But each of these relies on having some core language model that can do reasoning, and the way we normally get these is by pre-training on language. I could imagine instead pre-training on solutions to another problem that is arbitrarily hard to compute, simple to verify, and provides a natural learning gradient. For example, the LM could be given a numpy program f and an output f(x), and get loss L2(f(x), f(y)) for a guessed input y. Or it could try to guess zeros of polynomials and be penalized by the squared value of the polynomial at the guess. Then put the agents together in a way such that they can communicate through their input and output channels, and I suspect that they will be able to create language. Maybe language is not so hard: level 1 is just using words to point at concepts you already have. Then learning how to compose those words is just a matter of more time-steps, given sufficient parameter capacity in your networks.
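To make those two toy objectives concrete, here is a minimal sketch in numpy. The function and helper names (`f`, `inversion_loss`, `polynomial_zero_loss`) are my own illustrative choices, not from any existing library, and the model’s guess is stood in for by a plain scalar:

```python
import numpy as np

def f(x):
    # A toy "numpy program" whose input the model must guess.
    return np.sin(x) + 0.5 * x**2

def inversion_loss(f, x_true, y_guess):
    # L2 distance between the target output f(x) and the output
    # produced by the model's guessed input y: L2(f(x), f(y)).
    return float(np.sum((f(x_true) - f(y_guess)) ** 2))

def polynomial_zero_loss(coeffs, y_guess):
    # Penalize the guess by the squared value of the polynomial
    # at that point; the loss is zero exactly when y_guess is a root.
    # coeffs are ordered highest degree first, as np.polyval expects.
    return float(np.polyval(coeffs, y_guess) ** 2)

# A perfect inversion guess gives zero loss.
print(inversion_loss(f, 2.0, 2.0))            # 0.0
# x^2 - 4 has a root at 2, so the guess 2.0 incurs zero loss.
print(polynomial_zero_loss([1, 0, -4], 2.0))  # 0.0
```

Both losses are cheap to verify given the program or polynomial, while finding a low-loss guess can be made arbitrarily hard, which is the property the paragraph above is after.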
I am saying it is hard to know if a feature of a person gives rise to better communication in the whole group, which makes my theory conveniently hard to test. And then I am pointing at the singularity as a limiting object (from our point of view) of increasing communication, that follows in a trend after DNA, language, the printing press, phones, the internet, and AI.
For the bullets:
Agree, and I think that AI won’t last long in the world, but it might last long enough to destroy humans.
Thank you for bringing my post into an empirical domain I had not been thinking about. So I will modify my claim to ‘there exists a competence level α such that for all agents with competence level β>=α, nurture matters more than nature’, where ‘matters more than’ also needs to be made precise. Now the question is locating α, for which it would be useful for me to understand how common it is for a person to have a high quality upbringing (in a multi-faceted sense) and end up self-interested. Though I wonder if size of moral circle is the right metric.