as late as 1000 years ago, the fastest creatures on Earth are not humans, because you need even more G than that to go faster than cheetahs
...as a matter of fact there is no one investing in making better cheetahs
A little puzzled by Paul’s history here. Humans invested exorbitant amounts of money and effort into making better cheetahs, in the sense of ‘trying to be able to run much faster and become the fastest creatures on earth’; we call those manufactured cheetahs, “horses”. For literally thousands of years, breeding horses has been a central preoccupation of many civilizations and their elites (and horses themselves were merely one of many animals tried by speed-craving royalty & herders—I was intrigued to learn that capturing wild asses to hybridize into “kunga” was recently confirmed by fossil DNA), which is unsurprising because countries could rise and fall based on their mastery of ‘gotta go fast’ via cart or chariot warfare or horseback archery. Obtaining fast reliable horses could drive all sorts of things like regional trade patterns (eg. the Tibetan tea-horse trade). Even civilizations renowned for their infantry like Greece/Rome still invested huge fortunes into cavalry wings. Where they weren’t militarized, they might be the obsession of the post-military aristocracy—the ruin of the British aristocrat was drunken cards, loose women, and fast horses.
Horses, you may notice, still do not sprint faster than cheetahs; this is in part because our methods sucked (in considerable part due to deep misunderstandings of genetics, “humans are great at breeding animals” is not true, and the inauguration of thoroughbred racing led to rapid gains for a while despite all the millennia of pseudo-breeding before), and in part because for physics reasons horses probably can’t surpass cheetahs after all no matter how much you spent (and spent, and spent) and so it worked a lot better to apply our collective brains to the invention of cars and planes etc, and now humans really are the fastest creatures on Earth. There’s a lot you could say about that. But not “no one was investing in better cheetahs”.
Humans invested exorbitant amounts of money and effort into making better cheetahs, in the sense of ‘trying to be able to run much faster and become the fastest creatures on earth’; we call those manufactured cheetahs, “horses”.
I don’t think Paul is talking about that. Consider the previous lines (which seem like they could describe animal breeding to me):
and you think that G doesn’t help you improve on muscles and tendons?
until you have a big pile of it?
and Eliezer’s response in the following lines:
the natural selection of cheetahs is investing in it
it’s not doing so by copying humans because of fundamental limitations
however if we replace it with an average human investor, it still doesn’t copy humans, why would it
As I understand the conversation, Eliezer is trying to draw a connection between two different situations:
The natural selection of species on Earth (with various species represented as ‘firms’)
The market of AGI development on Earth
An important element of this comparison is that from the position of ignorance before G turns out to be hugely valuable in situation 1, the equivalent of ‘investors’ are doing backward-looking reasoning, where MUSCLES and TENDONS are the core features that are relevant for speed. In situation 2, the ‘average human investors’ will be playing the same game, throwing money at projects already proven to work instead of projects which haven’t worked yet but will.
In that particular line, Paul is (while he’s trying to figure out what comparison Eliezer is even trying to make) noting that there aren’t ‘investors’ in situation 1. The thing where humans use G to get better MUSCLES and TENDONS (the breeding you’re talking about) is part of the “fingersnap end of the world,” not the long story of evolution of cheetahs.
[I see Paul as generally arguing that the road to AGI will be ‘obvious in advance’, in some important sense, and will get something like a normal economic return for that sort of research investment. I see Eliezer as trying to argue that “no, AGI can come out of left field, because the mechanism that causes it to be possible will be some weird insight that is not well-priced before discovered.” Like, fundamentally the question is something like “how efficient and accurate is the AI research market?”]
I agree with your framing, and I think it shows Paul is wrong, leaving aside the specifics of the cheetah thing. Looking back, humans pursued both paths, the path of selecting cheetahs (horses) and of using G to look for completely different paradigms that blow away cheetahs. (Since we aren’t evolution, we aren’t restricted to picking just one approach.) And we can see the results today: when was the last time you rode a horse?
If you had invested in ‘the horse economy’ a century ago and bought the stock of bluechip buggywhip manufacturers instead of aerospace & automobile, I don’t think you would have done well, because it turns out that horses, even though we have extensively bred them to literally be almost as fast as cheetahs (increasing their speeds by easily a quarter just since the mid-1800s), and pushed them far beyond their skittish wild pony ancestors (exemplified by Przewalski’s horse), were still blown away by rapid technological developments in other transport methods. If you made projections on the future* of warfare or the economy based on the assumption of smooth obvious progress of ever more refined horse-breeding with plausible asymptoting of their capabilities given fundamental biological limits, well—your forecasts were utterly wrong.
* Trivia: in his Book of the New Sun, Gene Wolfe imagines a world where cavalry development did continue, in part due to technological regression; in Castle of Days, he explains that part of his setting and argues that far-future genetically-engineered horses, or “destriers”, could get up from quarter horse speeds to closer to 100mph, in which case cavalry becomes a viable alternative to mechanized warfare as they are too fast for regular machine gunnery to stop. (He further concludes: “Thus we have a return to full-fledged cavalry warfare, with dragoons, scouts, and charges. If it sounds fantastic, it should not; it is merely a revival of military techniques that have been in abeyance for scarcely a hundred years. Wait until genetic engineering really gets going and someone questions the need for separable mounts and riders. Fighting centaurs! (Sometimes it almost seems as if the Greeks …)”)
I absolutely agree that there are usually multiple ways to do something, often one of them improves faster than current SOTA, and that the faster one often overtakes the slower improving one. I may be misunderstanding what you are taking away from the horses analogy. I don’t think this undermines my point (or at least I don’t yet see the connection).
I absolutely agree that there are usually multiple ways to do something, often one of them improves faster than current SOTA, and that the faster one often overtakes the slower improving one. I may be misunderstanding what you are taking away from the horses analogy.
The takeaway seems… really obvious to me? In fact, it seems to me that the bolded first sentence of your quote basically is the takeaway: that there are multiple ways of doing things, some of which are faster than others.
This really does seem to me to be all you need to argue for a “discontinuity”: just have your timeline play out such that faster way of doing things is discovered after the slower way, and then boom, you have a discontinuity in the rate of progress, located at precisely the point where people switch from the slower way to the faster way. The horses analogy establishes this idea perfectly well in my view, but it seems almost… unnecessary? Like, this is a really obvious point?
And so, like, the question from my perspective is, why would this not be relevant to the idea of slow versus fast takeoff? It continues to remain perfectly plausible to me that a “faster way” of improving “G” exists, and that once this “faster way” is discovered, it’s so much faster that it basically obsoletes existing methods in roughly the same way that aerospace & automobile technology obsoleted horses. You say that your model doesn’t forbid this, but from my perspective, that… really sounds like you’re just conceding the whole argument right there.
Of course, presumably you don’t think you’re conceding the argument. So what’s the remaining disagreement? Is it really just a question of what relative probabilities to assign to such scenarios versus alternative scenarios, i.e. what counts as “plausible”? I’ll be honest and say that, unless your probabilities on fast-takeoff-style scenarios are really low, this seems like a pretty pointless line of disagreement to take; conversely, if your probabilities are that low, that seems to me like it’d require positive knowledge of what future AI development will look like, in a low-level, detailed way that I’m pretty sure is not justified by looking at GDP curves or the like. (Also, this comment from you seems to make it pretty clear that your probabilities are not that low.)
I remain confused, even after reading pretty much everything you’ve publicly written on this topic.
Note that I say this multiple times in the dialog and I agree it’s obvious. It also happens all the time in continuous trajectories, so if you think it should lead to discontinuities with high probability it seems like you have a lot of retrodicting-reality to do.
There are ways of doing things that improve faster, but usually they start off worse. Then they get better, and at some point they overtake the slower-improving alternatives, after which subsequent progress is faster at least for a while.
Sometimes there are exceptions where once something is possible it is necessarily much better than the predecessors (e.g. if there is a fixed cost equal to a significant percentage of GDP, and you can’t trade off fixed costs vs marginal costs). But this doesn’t happen very much, which isn’t so surprising on paper.
I don’t think any of this leads to a fast takeoff in theory. Also the view “>30% of progress is stuff from left field that jumps over the competition” doesn’t seem at all plausible to me.
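The overtaking dynamic described here (a faster-improving technology that starts off worse and later crosses over) can be sketched numerically. All numbers below are invented purely for illustration; none come from the discussion:

```python
import math

# Invented illustrative numbers: incumbent starts at 100 performance units,
# improving 10%/year; challenger starts at 1, improving 50%/year.
slow_start, slow_rate = 100.0, 1.10
fast_start, fast_rate = 1.0, 1.50

# Crossover time t solves: fast_start * fast_rate**t == slow_start * slow_rate**t
t_cross = math.log(slow_start / fast_start) / math.log(fast_rate / slow_rate)
print(f"challenger overtakes after ~{t_cross:.1f} years")  # ~14.8
```

Note that at the crossover the best-available performance is continuous: the two curves intersect, so there is no jump in level, only a change in growth rate afterward. That is the sense in which overtaking "happens all the time in continuous trajectories."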
in roughly the same way that aerospace & automobile technology obsoleted horses.
I’m totally happy with some better form of future AI being like automobiles, and indeed I think it’s extremely likely that AI broadly will replace humans in an automobile-like way. It just seems to me like automobiles obsoleted horses slowly, with lots of crappy automobiles well before usable automobiles. (I don’t know much about this case in particular so open to correction, but it seems like common sense / folklore; e.g. see wikipedia.)
There are ways of doing things that improve faster, but usually they start off worse. Then they get better, and at some point they overtake the slower-improving alternatives, after which subsequent progress is faster at least for a while.
I don’t see how the faster-improving technology starting off worse doesn’t simply strengthen the case for fast takeoff. While it’s worse, fewer resources will be invested into it relative to the current best thing, which leaves more room for rapid improvement once a paradigm shift finally occurs and people start switching resources over.
This seems to basically be what happened with automobiles v horses; yes if you specifically look at a Wikipedia article titled “history of the automobile” you will find crappy historical precedents to the (modern) automobile, but the point is precisely that those crappy precedents received comparatively little attention, and therefore did not obsolete horses, until suddenly they became not-so-crappy and… well, did.
I’m not exactly sure what the line of reasoning is here that leads you to look at the existence of crappy historical precedents and conclude, “oh, I guess the fact that these existed means progress wasn’t so discontinuous after all!” instead of, “hm, these crappy historical precedents existed and did basically nothing for a long time, which certainly implies a discontinuity at the point where they suddenly took over”; but from the perspective of someone invested in (what gwern described as) the “horse economy”, the latter would probably be a much more relevant takeaway than the former.
Sometimes there are exceptions where once something is possible it is necessarily much better than the predecessors (e.g. if there is a fixed cost equal to a significant percentage of GDP, and you can’t trade off fixed costs vs marginal costs). But this doesn’t happen very much, which isn’t so surprising on paper.
I don’t think any of this leads to a fast takeoff in theory. Also the view “>30% of progress is stuff from left field that jumps over the competition” doesn’t seem at all plausible to me.
I don’t disagree with any of what you say here; it just doesn’t seem very relevant to me. As you say, something new coming from left field and jumping over the competition is a rare occurrence; certainly not anywhere near 30%. The problem is that the impact distribution of new technologies is heavy-tailed, meaning that you don’t need a >30% proportion of new technologies that do the whole “obsoleting” thing, to get a hugely outsized impact from the few that do. Like, it seems to me that the quoted argument could have been made almost word-for-word by someone invested in the “horse economy” in the late 1800s, and it would have nonetheless done nothing to prevent them from being blindsided by the automobile economy.
Which brings me back to the point about needing positive knowledge about the new technology in question, if you want to have any hope of not being blindsided. Without positive knowledge, you’re reduced to guessing based on base rates, which again puts you in the position of the horse investor. Fundamentally I don’t see an escape from this: you can’t draw conclusions about the “physics” of new technologies by looking at GDP curves on graphs; those curves don’t (and can’t) reflect phenomena that haven’t been discovered yet.
How about discontinuities like inventing algorithms? I think often performance on a task gets jumps from O(n^k) to O(n^k’) with k’ < k, or even from O(k^n) to O(k’^n). I’d guess you’d say that these sorts of jumps would smooth out by being aggregated. But I guess I don’t see why you think that the level at which jumps from algorithmic invention happen is enough “lower” than the level at which meaningful progress towards TAI happens, for this smoothing out to happen.
(Or maybe, do you think jumps like this don’t happen (because in practice there’s intermediate bad versions of new algorithmic ideas), or don’t represent much discontinuity (because they’re invented when the task in question is in a regime where the performance is still comparable, or something), or aren’t similar to inventions of cognitive algorithms (e.g. because cognitive stuff is more like accumulating content or something)?)
What kind of example do you have in mind? Even for algorithmic problems with relatively small communities (and low $ invested) I think it’s still pretty rare to have big jumps like this (e.g. measuring performance for values of n at the scale where it can be run on conventional computers). I’m thinking of domains like combinatorial optimization and constraint satisfaction, convex optimization, graph algorithms. In most cases I think you get to a pretty OK algorithm quite quickly and further progress is slow. Exceptions tend to be cases like “there was an algorithm that works well in practice but not the worst case” or “the old algorithm got an approximation ratio of 0.87 but the new one gets an approximation ratio of 0.92 and if you absolutely require 0.9 the new one is very much faster” or extremely niche problems. But my knowledge isn’t that deep.
(And maybe what I’d say quantitatively is that something like ~10% of the log-space progress on problems people care about comes from big jumps vs 90% from relatively smooth improvements, with the number getting lower for stricter notions of “people care about” and the step size for “relatively smooth improvement” being defined by how many people are working on it.)
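To unpack the “log-space” accounting with a toy calculation (all numbers invented for illustration, not taken from the comment): if a problem accumulated a 1000x total speedup and big jumps supplied ~10% of the log-space progress, the jumps account for only about a 2x multiplicative factor:

```python
# Hypothetical numbers, chosen only to illustrate log-space bookkeeping.
total_speedup = 1000.0   # cumulative speedup on some problem
jump_share = 0.10        # fraction of log-space progress from big jumps

# Progress measured in log-space splits additively, so
# multiplicative factors split as powers of the total.
jump_factor = total_speedup ** jump_share          # ~2x from jumps
smooth_factor = total_speedup ** (1 - jump_share)  # ~500x from smooth gains

print(f"jumps: ~{jump_factor:.2f}x, smooth progress: ~{smooth_factor:.0f}x")
```

So on this accounting, a minority share of log-space progress translates into a small multiplicative contribution, even against a large total.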
(I’m sure you know more than I do about algorithms.)
What kind of example do you have in mind?
~10% of the log-space progress on problems people care about comes from big jumps vs 90% from relatively smooth improvements,
I’m thinking of the difference between insertion sort / bubble sort vs radix sort / merge sort.
(Knuth gives an interesting history here (The Art of Computer Programming, Vol. 3, section 5.5, p. 383); apparently in 1890 the US census data was processed using the radix sorting algorithm running on a mechanical-electronic-human hybrid. There was an order-preserving card-stack merging machine in 1938. Then in 1945, von Neumann wrote down a merge sort, while independently Zuse wrote down an insertion sort.)
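The size of the jump being gestured at can be made concrete by counting comparisons in an O(n^2) insertion sort versus an O(n log n) merge sort (a quick sketch, not tuned implementations):

```python
import random

def insertion_sort_comparisons(a):
    """Sort a copy of `a` with insertion sort; return the comparison count (~n^2/4 on random input)."""
    a = list(a)
    comps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comps += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return comps

def merge_sort_comparisons(a):
    """Sort a copy of `a` with merge sort; return the comparison count (~n*log2(n))."""
    comps = 0
    def merge_sort(a):
        nonlocal comps
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            comps += 1
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:]); out.extend(right[j:])
        return out
    merge_sort(a)
    return comps

random.seed(0)
data = [random.random() for _ in range(2000)]
print(insertion_sort_comparisons(data))  # roughly n^2/4, about a million
print(merge_sort_comparisons(data))      # roughly n*log2(n), about twenty thousand
```

Already at n = 2000 the gap is around 50x, and it widens without bound as n grows, which is the sense in which an asymptotic improvement is a qualitatively different kind of gain than a constant-factor one.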
I guess we’re talking past each other because we’re answering different versions of “What is continuous in what?”. Performance on a task can be, and is, much more continuous in time than “ideas” are continuous in time, because translating ideas into performance on a task takes resources (money, work, more ideas). So I concede that what I said here:
I think often performance on a task gets jumps
was mostly incorrect, if we don’t count the part where
you get to a pretty OK algorithm quite quickly
So one question is, is TAI driven by ideas that will have a stage where they get to a pretty okay version quite quickly once the “idea” is there, or no, or what? Another question is, do you think “ideas” are discontinuous?
Like, fundamentally the question is something like “how efficient and accurate is the AI research market?”
I would distinguish two factors:
How powerful and well-directed is the field’s optimization?
How much does the technology inherently lend itself to information asymmetries?
You could turn the “powerful and well-directed” dial up to the maximum allowed by physics, and still not thereby guarantee that information asymmetries are rare, because the way that a society applies maximum optimization pressure to reaching AGI ASAP might route through a lot of individuals and groups going down different rabbit holes. A researcher could be rationally optimistic about her rabbit hole based on specialized knowledge or experience that’s hard to instantly transmit to investors, the field as a whole, etc.
I don’t think this is relevant to the disanalogy I was trying to make, which was between natural selection and investors. It seems like I’m thinking about the comparison in a different way here. Hopefully this explains your puzzlement.
That said, since I can’t resist responding to random comments: are horses really being bred for sprinting as fast as they can for 20-30 seconds? (Isn’t that what cheetahs are so good at?) What is the military/agricultural/trade context in which that is relevant? Who cares other than horse racers? Over any of the distances where people are using horses I would expect them to be considerably faster than cheetahs even if both are unburdened. I don’t know much about horses though.
That said, since I can’t resist responding to random comments: are horses really being bred for sprinting as fast as they can for 20-30 seconds?
Yes, they were, and they still are. Cavalry charges are not that long*, and even if you want to absurdly nitpick on this exact basis where 20-30 seconds counts but 30-40s doesn’t, well, as it happens, 20-30s is just about how long quarter horse races last. (Quarter horses, incidentally, now reach the low end of cheetah top speeds: 55mph vs ~60mph. So depending on which pair of horses & cheetahs you compare, we did succeed in breeding a better cheetah, because you can’t ride a cheetah at any speed and they have much worse endurance etc. I’m not going to insist on this point, however, because I think it’s unnecessary, even if it should make you a little more humble in your assertions about what humans have and have not done.)
Who cares other than horse racers?
First, what’s wrong with horse racers? It’s the sport of kings. (The causality in that phrase going both directions.) Horse racers are real, they do in fact exist, I have met them in the flesh and will testify that they are not “no one”. You disdained the existence of any organized large human investments into making extremely fast animals. The millions of quarter horse owners dating back to the 1600s, as well as all the horse breeders and horse racers throughout human history, beg to differ, as they are just as valid an example as any other human would be of such things existing—quite aside from reasons everyone else might care, like minor things such as “civilizations rose and fell on how well they did this”.
What is the military/agricultural/trade context in which that is relevant?
I dunno. Since you didn’t specify, apparently the relevance of the context wasn’t relevant to your point.
* They don’t need to be. Think about how far accurate arrow fire goes, and how long it takes a horse to cover that distance if it’s sprinting at 30MPH+.
I was saying that natural selection is not a human investor and behaves differently, responding to Eliezer saying “not as a metaphor but as simple historical fact, that’s how it played out.” I’m sorry if the exchange was unclear (but hopefully not surprising since it was a line of chat in a fast dialog written in about 3 seconds.) I think that you have to make an analogy because the situation is not obviously structurally identical and there are different analogies you could draw here and it was not clear which one he was making.
I’m sorry I engaged about horse breeding (I think it was mostly a distraction).
That said, since I can’t resist responding to random comments: are horses really being bred for sprinting as fast as they can for 20-30 seconds? (Isn’t that what cheetahs are so good at?) What is the military/agricultural/trade context in which that is relevant? Who cares other than horse racers? Over any of the distances where people are using horses I would expect them to be considerably faster than cheetahs even if both are unburdened. I don’t know much about horses though.
My understanding is that the primary military use of horses in Europe for elites was charges into massed infantry, which were not all that much longer than 30 seconds (rarely more than a few minutes). I would expect them to care more about things like carrying capacity and psychology than sprinting speed (as you want to stay in formation, and be able to break through even if they don’t break formation). Other places focused on horse archers, which involved the horses moving rapidly for much longer periods of time, or skirmishers, where you cared more about the cheetah-like ability to run down someone trying to get away from you.
Which brings me back to the point about needing positive knowledge about the new technology in question, if you want to have any hope of not being blindsided. Without positive knowledge, you’re reduced to guessing based on base rates, which again puts you in the position of the horse investor. Fundamentally I don’t see an escape from this: you can’t draw conclusions about the “physics” of new technologies by looking at GDP curves on graphs; those curves don’t (and can’t) reflect phenomena that haven’t been discovered yet.
How about discontinuities like inventing algorithms? I think often performance on a task gets jumps of O(n^k) to O(n^k’) with k’ < k, or even O(k^n) to O(k’^n). I’d guess you’d say that these sorts of jumps would smooth out by being aggregated. But I guess I don’t see why you think that the level at which jumps from algorithmic invention happen is enough “lower” than the level at which meaningful progress towards TAI happens, for this smoothing out to occur.
(Or maybe, do you think jumps like this don’t happen (because in practice there’s intermediate bad versions of new algorithmic ideas), or don’t represent much discontinuity (because they’re invented when the task in question is in a regime where the performance is still comparable, or something), or aren’t similar to inventions of cognitive algorithms (e.g. because cognitive stuff is more like accumulating content or something)?)
What kind of example do you have in mind? Even for algorithmic problems with relatively small communities (and low $ invested) I think it’s still pretty rare to have big jumps like this (e.g. measuring performance for values of n at the scale where it can be run on conventional computers). I’m thinking of domains like combinatorial optimization and constraint satisfaction, convex optimization, graph algorithms. In most cases I think you get to a pretty OK algorithm quite quickly and further progress is slow. Exceptions tend to be cases like “there was an algorithm that works well in practice but not the worst case” or “the old algorithm got an approximation ratio of 0.87 but the new one gets an approximation ratio of 0.92, and if you absolutely require 0.9 the new one is very much faster” or extremely niche problems. But my knowledge isn’t that deep.
(And maybe what I’d say quantitatively is that something like ~10% of the log-space progress on problems people care about comes from big jumps vs 90% from relatively smooth improvements, with the number getting lower for stricter notions of “people care about” and the step size for “relatively smooth improvement” being defined by how many people are working on it.)
(I’m sure you know more than I do about algorithms.)
I’m thinking of the difference between insertion sort / bubble sort vs radix sort / merge sort.
(Knuth gives an interesting history here (Art of Programming Vol 3, section 5.5, p 383); apparently in 1890 the US census data was processed using the radix sorting algorithm running on a mechanical-electronic-human hybrid. There was an order-preserving card-stack merging machine in 1938. Then in 1945, von Neumann wrote down a merge sort, while independently Zuse wrote down an insertion sort.)
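To make the insertion-sort-vs-merge-sort comparison above concrete, here is a minimal sketch (not from the original discussion; all function names are my own) that counts element comparisons, illustrating the kind of O(n²) → O(n log n) jump being debated:

```python
# Rough sketch: count comparisons to illustrate an O(n^2) -> O(n log n) jump.
import random


def insertion_sort_comparisons(xs):
    """Sort a copy of xs by insertion sort; return number of comparisons."""
    xs = list(xs)
    comps = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0:
            comps += 1
            if xs[j - 1] > xs[j]:
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
            else:
                break  # element is in place; stop scanning left
    return comps


def merge_sort_comparisons(xs):
    """Sort a copy of xs by top-down merge sort; return number of comparisons."""
    comps = 0

    def sort(a):
        nonlocal comps
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = sort(a[:mid]), sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            comps += 1
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    sort(list(xs))
    return comps


random.seed(0)
data = [random.random() for _ in range(4096)]
print(insertion_sort_comparisons(data))  # roughly n^2 / 4 on random input
print(merge_sort_comparisons(data))      # roughly n * log2(n)
```

At n = 4096 the gap is already several hundredfold, and it widens with n; whether progress toward TAI is driven by jumps of this shape is of course the live question in the exchange above.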
I guess we’re talking past each other because we’re answering different versions of “What is continuous in what?”. Performance on a task can be, and is, much more continuous in time than “ideas” are continuous in time, because translating ideas into performance on a task takes resources (money, work, more ideas). So I concede that what I said here:
was mostly incorrect, if we don’t count the part where
So one question is, is TAI driven by ideas that will have a stage where they get to a pretty okay version quite quickly once the “idea” is there, or no, or what? Another question is, do you think “ideas” are discontinuous?
I would distinguish two factors:
How powerful and well-directed is the field’s optimization?
How much does the technology inherently lend itself to information asymmetries?
You could turn the “powerful and well-directed” dial up to the maximum allowed by physics, and still not thereby guarantee that information asymmetries are rare, because the way that a society applies maximum optimization pressure to reaching AGI ASAP might route through a lot of individuals and groups going down different rabbit holes. A researcher could be rationally optimistic about her rabbit hole based on specialized knowledge or experience that’s hard to instantly transmit to investors, the field as a whole, etc.
I don’t think this is relevant to the disanalogy I was trying to make, which was between natural selection and investors. It seems like I’m thinking about the comparison in a different way here. Hopefully this explains your puzzlement.
That said, since I can’t resist responding to random comments: are horses really being bred for sprinting as fast as they can for 20-30 seconds? (Isn’t that what cheetahs are so good at?) What is the military/agricultural/trade context in which that is relevant? Who cares other than horse racers? Over any of the distances where people are using horses I would expect them to be considerably faster than cheetahs even if both are unburdened. I don’t know much about horses though.
Yes, they were, and they still are. Cavalry charges are not that long*, and even if you want to absurdly nitpick on this exact basis where 20-30 seconds counts but 30-40s doesn’t, well, as it happens, 20-30s is exactly about how long quarter horse races last. (Quarter horses, incidentally, now reach the low end of cheetah top speeds: 55mph, vs ~60mph. So depending on which pair of horses & cheetahs you compare, we did succeed in breeding a better cheetah, because you can’t ride a cheetah at any speed and they have much worse endurance etc. I’m not going to insist on this point, however, because I think it’s unnecessary, even if it should make you a little more humble in your assertions about what humans have and have not done.)
First, what’s wrong with horse racers? It’s the sport of kings. (The causality in that phrase goes both directions.) Horse racers are real, they do in fact exist, I have met them in the flesh and will testify that they are not “no one”. You disdained the existence of any organized large human investments into making extremely fast animals. The millions of quarter horse owners dating back to the 1600s, as well as all the horse breeders and horse racers throughout human history, beg to differ, as they are just as valid an example as any other human would be of such things existing—quite aside from minor reasons everyone else might care, like “civilizations rose and fell on how well they did this”.
I dunno. Since you didn’t specify, apparently the relevance of the context wasn’t relevant to your point.
* They don’t need to be. Think about how far accurate arrow fire goes, and how long it takes a horse to cover that distance if it’s sprinting at 30MPH+.
I was saying that natural selection is not a human investor and behaves differently, responding to Eliezer saying “not as a metaphor but as simple historical fact, that’s how it played out.” I’m sorry if the exchange was unclear (but hopefully that’s not surprising, since it was a line of chat in a fast dialog written in about 3 seconds). I think that you have to make an analogy because the situation is not obviously structurally identical, and there are different analogies you could draw here, and it was not clear which one he was making.
I’m sorry I engaged about horse breeding (I think it was mostly a distraction).
My understanding is that the primary military use of horses in Europe for elites was charges into massed infantry, which were not all that much longer than 30 seconds (rarely more than a few minutes). I would expect them to care more about things like carrying capacity and psychology than sprinting speed (as you want to stay in formation, and be able to break through even if they don’t break formation). Other places focused on horse archers, which involved the horses moving rapidly for much longer periods of time, or skirmishers, where you cared more about the cheetah-like ability to run down someone trying to get away from you.