Summary of my response: before you can train a really powerful AI, someone else can train a slightly worse AI.
Yeah, and before you can evolve a human, you can evolve a Homo erectus, which is a slightly worse human.
I might be wrong about this, but my impression was that the rise of human culture and civilization was timed with the end of the Pleistocene, rather than timed with the development of better (and more general) brains.
My guess is that modern humans probably do have more general brains than Homo erectus that came before us. But if Homo erectus had not been living in a geological epoch of repeated glaciations, then perhaps we would have seen a simpler Homo erectus civilization?
In general, I don’t yet see a strong reason to think that our general brain architecture is the sole, or potentially even primary reason why we’ve developed civilization, discontinuous with the rest of the animal kingdom. A strong requirement for civilization is the development of cultural accumulation via language, and more specifically, the ability to accumulate knowledge and technology over generations. Just having a generalist brain doesn’t seem like enough; for example, could there have been a dolphin civilization?
If I take the number of years since the emergence of Homo erectus (2 million years) and divide that by the number of years since the origin of life (3.77 billion years), and multiply that by the number of years since the founding of the field of artificial intelligence (65 years), I get a little over twelve days. This seems to at least not directly contradict my model of Eliezer saying “Yes, there will be an AGI capable of establishing an erectus-level civilization twelve days before there is an AGI capable of establishing a human-level one, or possibly an hour before, if reality is again more extreme along the Eliezer-Hanson axis than Eliezer. But it makes little difference whether it’s an hour or twelve days, given anything like current setups.” Also insert boilerplate “essentially constant human brain architectures, no recursive self-improvement, evolutionary difficulty curves bound above human difficulty curves, etc.” for more despair.
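For concreteness, here is that back-of-the-envelope calculation spelled out (a minimal sketch of the arithmetic, using only the numbers already stated above):

```python
# Scale the erectus-to-modern-human gap in evolutionary time onto the
# timeline of AI research, using the figures from the paragraph above.
years_since_erectus = 2_000_000            # emergence of Homo erectus
years_since_origin_of_life = 3_770_000_000
years_of_ai_research = 65                  # the field is roughly 65 years old

fraction_of_evolutionary_history = years_since_erectus / years_since_origin_of_life
scaled_gap_years = fraction_of_evolutionary_history * years_of_ai_research
print(scaled_gap_years * 365.25)           # ~12.6 days
```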
I guess even though I don’t disagree that knowledge accumulation has been a bottleneck for humans dominating all other species, I don’t see any strong reason to think that knowledge accumulation will be a bottleneck for an AGI dominating humans, since the limits to human knowledge accumulation seem mostly biological. Humans seem to get less plastic with age; mortality, among other things, forces us to specialize our labor; we have to sleep; we lack serial depth; we don’t even approach the physical limits on speed; we can’t run multiple instances of our own source code; we have no previous example of an industrial civilization to observe; I could go on: a list of biological fetters that either wouldn’t apply to an AGI or that an AGI could emulate inside of a single mind instead of across a civilization. I am deeply impressed by what has come out of the bare minimum of human innovative ability plus cultural accumulation. You say “The engine is slow,” I say “The engine hasn’t stalled, and look how easy it is to speed up!”
I’m not sure I like using the word ‘discontinuous’ to describe any real person’s position on plausible investment-output curves any longer; people seem to think it means “intermediate value theorem doesn’t apply” (which seems reasonable), when usually hard/fast takeoff proponents really mean “intermediate value theorem still applies, but the curve can be almost arbitrarily steep on certain subintervals.”
I guess even though I don’t disagree that knowledge accumulation has been a bottleneck for humans dominating all other species, I don’t see any strong reason to think that knowledge accumulation will be a bottleneck for an AGI dominating humans, since the limits to human knowledge accumulation seem mostly biological. Humans seem to get less plastic with age; mortality, among other things, forces us to specialize our labor; we have to sleep; we lack serial depth; we don’t even approach the physical limits on speed; we can’t run multiple instances of our own source code; we have no previous example of an industrial civilization to observe; I could go on: a list of biological fetters that either wouldn’t apply to an AGI or that an AGI could emulate inside of a single mind instead of across a civilization.
I agree with this, and I think that you are hitting on a key reason that these debates don’t hinge on what the true story of the human intelligence explosion ends up being. Whichever of these is closer to the truth:
a) the evolution of individually smarter humans using general reasoning ability was the key factor
b) the evolution of better social learners and the accumulation of cultural knowledge was the key factor
...either way, there’s no reason to think that AGI has to follow the same kind of path that humans did. I found an earlier post on the Henrich model of the evolution of intelligence, Musings on Cumulative Cultural Evolution and AI. I agree with Rohin Shah’s takeaway on that post:
I actually don’t think that this suggests that AI development will need both social and asocial learning: it seems to me that in this model, the need for social learning arises because of the constraints on brain size and the limited lifetimes. Neither of these constraints apply to AI—costs grow linearly with “brain size” (model capacity, maybe also training time) as opposed to superlinearly for human brains, and the AI need not age and die. So, with AI I expect that it would be better to optimize just for asocial learning, since you don’t need to mimic the transmission across lifetimes that was needed for humans.
(To be clear, the thing you quoted was commenting on the specific argument presented in that post. I do expect that in practice AI will need social learning, simply because that’s how an AI system could make use of the existing trove of knowledge that humans have built.)
I’m not sure I like using the word ‘discontinuous’ to describe any real person’s position on plausible investment-output curves any longer; people seem to think it means “intermediate value theorem doesn’t apply” (which seems reasonable), when usually hard/fast takeoff proponents really mean “intermediate value theorem still applies, but the curve can be almost arbitrarily steep on certain subintervals.”
FWIW, when I use the word discontinuous in these contexts, I’m almost always referring to the definition Katja Grace uses:
We say a technological discontinuity has occurred when a particular technological advance pushes some progress metric substantially above what would be expected based on extrapolating past progress. We measure the size of a discontinuity in terms of how many years of past progress would have been needed to produce the same improvement. We use judgment to decide how to extrapolate past progress.
This is quite different than the mathematical definition of continuous.
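To make that contrast concrete, here is a toy illustration of my own (not something from Grace's writeup): a progress curve that is mathematically continuous everywhere, so the intermediate value theorem applies, yet which still registers a large discontinuity in the years-of-past-progress sense.

```python
# Toy progress curve: continuous everywhere (no jumps, the intermediate value
# theorem applies), yet one year of it amounts to ~30 years of prior progress.

def progress(t: float) -> float:
    trend = t                                    # steady 1 unit of progress per year
    ramp = 30.0 * max(0.0, min(t - 50.0, 1.0))   # 30 extra units, spread continuously over year 50-51
    return trend + ramp

past_rate = progress(50.0) / 50.0                # 1 unit/year before the ramp
jump = progress(51.0) - progress(50.0)           # 31 units in a single year
print(jump / past_rate)                          # ~31 "years of past progress" in one year
```

The curve never jumps, but one year of movement on it is worth roughly three decades of prior progress, which is exactly the kind of thing the years-of-past-progress metric is designed to flag.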
In general, I don’t yet see a strong reason to think that our general brain architecture is the sole, or potentially even primary reason why we’ve developed civilization, discontinuous with the rest of the animal kingdom. A strong requirement for civilization is the development of cultural accumulation via language, and more specifically, the ability to accumulate knowledge and technology over generations.
In The Secrets of Our Success, Joe Henrich argues that without our stock of cultural knowledge, individual humans are not particularly more generally intelligent than apes. (Neanderthals may very well have been more generally intelligent than humans—and indeed, their brains were bigger than ours.)
And, he claims, to the extent that individual humans are now especially intelligent, this was because of culture-driven natural selection. For Henrich, the story of human uniqueness is a story of a feedback loop: increased cultural know-how, which drives genetic selection for bigger brains and better social learning, which leads to increased cultural know-how, which drives genetic selection for bigger brains… and so forth, until you have a very weird great ape that is weak, hairless, and has put a flag on the moon.
Note: this evolution + culture feedback loop is still a huge discontinuity that led to massive changes in relatively short evolutionary time!
Just having a generalist brain doesn’t seem like enough; for example, could there have been a dolphin civilization?
Henrich speculates that a bunch of idiosyncratic features came together to launch us into the feedback loop that led to us becoming a cultural species. Most species, including dolphins, do not get onto this feedback loop because of a “startup” problem: bigger brains will give a fitness advantage only up to a certain point, because individual learning can only be so useful. For there to be further selection for bigger brains, you need a stock of cultural know-how (cooking, hunting, special tools) that makes individual learning very important for fitness. But, to have a stock of cultural know-how, you need big brains.
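This “startup” problem reads like a bistability story, so here is a deliberately crude toy of it (my own sketch with made-up parameters, not Henrich’s model): suppose brain investment pays off only in proportion to the existing cultural stock, and culture accumulates only in proportion to brain size. Below a threshold both decay back to baseline; above it they reinforce each other.

```python
# A deliberately crude toy of the "startup problem" (my own illustration, not
# Henrich's model): bigger brains pay off only in proportion to the cultural
# stock available to learn from, and culture accumulates only in proportion to
# brain size, so the two feed back on each other.

FLOOR, CAP = 0.1, 1e6   # baseline values, plus a cap to keep the toy finite

def run(brain: float, culture: float, steps: int = 200) -> tuple[float, float]:
    for _ in range(steps):
        brain += 0.05 * (culture - 1.0) * brain    # payoff from culture, minus a fixed brain cost
        culture += 0.05 * (brain - 1.0) * culture  # know-how gained from bigger brains, minus loss
        brain = min(max(brain, FLOOR), CAP)
        culture = min(max(culture, FLOOR), CAP)
    return brain, culture

print(run(0.9, 0.9))  # below the startup threshold: both decay back to the floor
print(run(1.1, 1.1))  # above it: runaway mutual reinforcement (hits the cap)
```

The only point is the threshold: the same feedback rule yields stagnation or runaway growth depending on whether you start above or below the unstable fixed point.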
Henrich speculates that humans overcame the startup problem due to a variety of factors that came together when we descended from the trees and started living on the ground. The important consequences of a species being on the ground (as opposed to in the trees):
It frees up your hands for tool use. Captive chimps, which are more “grounded” than wild chimps, make more tools.
It’s easier for you to find tools left by other people.
It’s easier for you to see what other people are doing and hang out with them. (“Hang out” being inapt, since that’s precisely not what you’re doing).
You need to group up with people to survive, since there are terrifying predators on the ground. Larger groups offer protection; these larger groups will accelerate the process of people messing around with tools and imitating each other.
Larger groups also produce new forms of social organization. Apparently, in smaller groups of chimps, the reproductive strategy that every male tries to follow is “fight as many males as you can for mating opportunities.” But in a larger group, it becomes better for some males to try to pair bond – to get multiple reproductive opportunities with one female, by hanging around her and taking care of her.
Pair bonding in turn allows for more kinship relationships. Kinship relationships mean you grow up around more people; this accelerates learning. Kinship also allows for more genetic selection for big-brained, slow-developing learners: it becomes less prohibitively costly to give birth to big-brained, slow-growing children, because more people are around to help out and pool food resources.
This story is, by Henrich’s own account, quite speculative. You can find it in Chapter 16 of the book.
In The Secrets of Our Success, Joe Henrich argues that without our stock of cultural knowledge, individual humans are not particularly more generally intelligent than apes.
I 75% agree with this, but I do think that individual humans are smarter than individual chimpanzees. A big area of disagreement is how to distinguish “intrinsic ability to innovate” from “ability to process culture”, and whether it’s even possible to distinguish the two. I wrote a post about this two years ago.
For Henrich, the story of human uniqueness is a story of a feedback loop: increased cultural know-how, which drives genetic selection for bigger brains and better social learning, which leads to increased cultural know-how, which drives genetic selection for bigger brains… and so forth, until you have a very weird great ape that is weak, hairless, and has put a flag on the moon.
This is the big crux for me on the evolution of humans and its relevance to the foom debate.
Roughly, I think Henrich’s model is correct. I think his model provides a simple, coherent explanation for why humans dominate the world, and why it happened on such a short timescale, discontinuously with other animals.
Of course, intelligence plays a large role in his model: you can’t get ants who can go to the moon, no matter how powerful their culture. But the great insight is that our power does not come from our raw intelligence: it comes from our technology/culture, which is so powerful because it was allowed to accumulate.
Cultural accumulation is a zero-to-one discontinuity. That is, you can go a long time without any of it; then something comes along that’s able to do it just a little bit, and shortly after, it blows up. But after you’ve already reached one, going from “being able to accumulate culture at all” to “being able to accumulate it slightly faster” does not give you the same discontinuous foom as before.
We could, for example, imagine an AI that can accumulate culture slightly faster than humans. Since this AI is only slightly better than humans, however, it doesn’t go and create its own culture on its own. Unlike humans—who actually did go and create their own culture completely on their own, separate from other animals—the AI will simply be one input to the human economy.
This AI would be an important input to our economy, for sure, but not a completely separate entity producing its own distinct civilization, like the prototypical AI that spins up nanobot factories and kills us all within 3 minutes. It would be more like the brilliant professor, or the easily copyable worker. In other words, it might speed up our general civilizational ability to develop technology, and greatly enhance our productive capabilities. But it won’t, on its own, discontinuously produce technology 2.0 (where technology 1.0 was humans, and animals were roughly technology 0.0).
I think a superintelligent AI can FOOM its way to manufacturing nanobots, because the biggest bottleneck to engineering and manufacturing them is research that can be done without needing input from the physical universe beyond the physics we already know and the machines we already have, with very slight upgrades or creative uses beyond what they were designed for. Manufacturing nanobots is like a logic brain teaser for a sufficiently intelligent reasoner. I guess you have a different perspective, in that you think the process requires a culture of socializing beings, and/or more input from the physical universe?
If I take the number of years since the emergence of Homo erectus (2 million years) and divide that by the number of years since the origin of life (3.77 billion years), and multiply that by the number of years since the founding of the field of artificial intelligence (65 years), I get a little over twelve days. This seems to at least not directly contradict my model of Eliezer saying “Yes, there will be an AGI capable of establishing an erectus-level civilization twelve days before there is an AGI capable of establishing a human-level one, or possibly an hour before, if reality is again more extreme along the Eliezer-Hanson axis than Eliezer. But it makes little difference whether it’s an hour or twelve days, given anything like current setups.” Also insert boilerplate “essentially constant human brain architectures, no recursive self-improvement, evolutionary difficulty curves bound above human difficulty curves, etc.” for more despair.
That was a pretty good Eliezer model; for a second I was trying to remember if and where I’d said that.