The current race towards agentic AGI in particular is much more like 50% cultural/path-dependent than 5% cultural/path-dependent and 95% obvious. I think the decisions of the major labs are significantly influenced by particular beliefs about AGI & timelines; while these are likely (at least directionally) true beliefs, it’s not at all clear to me that the industry would’ve been this “situationally aware” in alternative timelines.
This is probably cruxy here. I've viewed the race to replace humans with AI as much less path-dependent ever since I realized how large the scale-up of compute was, how strongly the bitter lesson held, and how the scale-up of pure self-supervised learning was hitting slowdowns; more generally, I subscribe to a view in which research is less path-dependent than people think.
More generally, I’m very skeptical of changing the ultimate paradigm for AGI into something that’s safer but less competitive. I believe your initial proposals relied on shifting the AI paradigm toward significantly complementing humans who hold local knowledge, rather than straight-up automating them; but I view automation as unlocking >99% of the value, due to the long tail of cases that occur IRL, so this is a large amount of value to give up.
(More fundamentally, there’s also the question of how high you take human/AI complementarity at cognitive skills to be; right now it’s surprisingly high IMO)
I also suspect this is a lesser crux: while I do think complementarities exist, I’d say a human+AI pairing is basically always much less valuable than an AI straight-up replacing the human, provided replacing the human actually worked.