My own updates after I wrote that were:
Increased likelihood of self-supervised learning algorithms as either a big part or even the entirety of the technical path to AGI—insofar as self-supervised learning is the lion’s share of how the neocortex learning algorithm supposedly works. That’s why I’ve been writing posts like Self-Supervised Learning and AGI safety.
Shorter timelines and faster takeoff, insofar as we think the algorithm is not overwhelmingly complicated.
Increased likelihood of “one algorithm to rule them all” over Comprehensive AI Services. This might be on the meta-level of one learning algorithm to rule them all, and we feed it biology books to get a superintelligent biologist, and separately we feed it psychology books and nonfiction TV to get a superintelligent psychological charismatic manipulator, etc. Or it might be on the base level of one trained model to rule them all, and we train it with all 50 million books and 100,000 years of YouTube and anything else we can find. The latter can ultimately be more capable (you understand biology papers better if you also understand statistics, etc. etc.), but on the other hand the former is more likely if there are scaling limits where memory access grinds to a halt after too many gigabytes get loaded into the world-model, or things like that. Either way, it would make it likelier for AGI (or at least the final missing ingredient of AGI) to be developed in one place, i.e. the search-engine model rather than the open-source software model.
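For readers unfamiliar with the term, "self-supervised learning" just means the training labels are extracted from the raw data itself rather than from human annotation, e.g. predicting the next token of a stream from what came before. A minimal toy sketch of that idea (a bigram count model, not a neural network; everything here is illustrative, not any specific system discussed above):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Self-supervised 'training': each character's label is simply
    the next character in the raw text -- no human annotation needed."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev):
    """Predict the continuation most frequently seen during training."""
    return counts[prev].most_common(1)[0][0]

model = train_bigram("banana")
print(predict_next(model, "a"))  # prints "n": 'a' is usually followed by 'n'
```

The same predict-the-data-from-the-data structure scales from this toy all the way up to large models trained on the "50 million books and 100,000 years of YouTube" mentioned above; only the model class and compute change.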