I think the amount of money-and-talent invested in the semiconductor industry has been much more stable than in AI though, no? Not constant, but growing steadily with the population/economy/etc. In addition, Moore’s law being so well-known potentially makes it a self-fulfilling prophecy, with the industry treating it as a target to aim for.
Also, have you tracked the previous discussion on Old Scott Alexander and LessWrong about “mysterious straight lines” generally being a surprisingly common phenomenon in economics? For example, on an old AI post Oli noted:
This is one of my major go-to examples of this really weird linear phenomenon:
150 years of a completely straight line! There were two world wars in there, the development of artificial fertilizer, the broad industrialization of society, the invention of the car. And all throughout, the line just carries on, with no significant perturbations.
This doesn’t mean we should automatically take new proposed Straight Line Phenomena at face value; I don’t actually know whether this is more like “pretty common actually” or “there are a few notable times it was true that are drawing undue attention.” But I’m at least not like “this is a never-before-seen anomaly.”
Kurzweil (and gwern, in a cousin comment) both think that “effort will be allocated efficiently over time,” and for Kurzweil this explained much, much more than just Moore’s Law.
Ray’s charts from “the olden days” (the nineties and aughts and so on) were normalized around what $1,000 (inflation-adjusted) spent on computing could buy… and this let him put vacuum tubes and even steam-powered, gear-based mechanical computers on a single chart… and it still worked.
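The normalization itself is simple arithmetic: inflation-adjust the device’s price, then scale its throughput to what $1,000 would buy. A minimal sketch of that bookkeeping — the device figures below are illustrative placeholders, not Kurzweil’s actual data:

```python
# Sketch of Kurzweil-style normalization: calculations per second per
# inflation-adjusted $1,000, which puts very different technologies on one
# axis. All device figures are illustrative placeholders, NOT real data
# from Kurzweil's charts.

def calcs_per_inflated_1000(calcs_per_sec, price_then, cpi_then, cpi_now):
    """Normalize a device's throughput to calcs/sec per $1,000 in today's money."""
    price_now = price_then * (cpi_now / cpi_then)  # inflation-adjust the price
    return calcs_per_sec * (1000.0 / price_now)

# (name, calcs/sec, nominal price in its era, CPI index in its era) -- placeholders
devices = [
    ("gear-based mechanical calculator", 0.1, 300.0, 10.0),
    ("vacuum-tube computer",             5e3, 5e5,   25.0),
    ("commodity microprocessor",         1e9, 1e3,  100.0),
]

CPI_NOW = 100.0
for name, cps, price, cpi in devices:
    print(f"{name}: {calcs_per_inflated_1000(cps, price, cpi, CPI_NOW):.3g} calcs/sec per $1,000")
```

The point of the single shared unit is exactly what the comment describes: once everything is “calcs/sec per inflation-adjusted $1,000,” steam-era gears and silicon can sit on one log-scale chart.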
The 2020s have basically always been very likely to be crazy. Based on my familiarity with old ML/AI systems and standards, the bar the term “AGI” set a decade ago has already been reached. Claude is already smarter than most humans, but (from the perspective of what smart, numerate, and reasonable people predicted in 2009) he is (arguably) over budget and behind schedule.
Hm, that’s a very good point.
That surprisingly straight line reminds me of what happens when you use noise to regularise an otherwise decidedly non-linear function: https://www.imaginary.org/snapshot/randomness-is-natural-an-introduction-to-regularisation-by-noise