Some quick thoughts on these points:
I think the human ability to communicate and coordinate is a double-edged sword. In particular, it enables the attack vector of dangerous self-propagating memes. I expect memetic warfare to play a major role in many of the failure scenarios I can think of. As we’ve seen, even humans are capable of crafting some pretty potent memes, and defending against even human actors is difficult.
I think it’s likely that the relevant reference class here is research bets rather than the “task” of AGI. An extremely successful research bet could be currently underinvested in, but once it shows promise, discontinuous (relative to the bet) amounts of resources will be dumped into scaling it up, even if overall investment in the task as a whole remains continuous. In other words, even if investment in AGI is continuous (though that might not hold either), discontinuity can occur at the level of specific research bets. A historical example is ImageNet seeing discontinuous improvement with AlexNet despite continuous investment in image recognition up to that point. (Also, for what it’s worth, my personal model of AI doom doesn’t depend heavily on discontinuities existing, though they do make things worse.)
I think there are plausible alternative explanations for why capability gains have been driven primarily by compute. For instance, because ML talent is extremely expensive while compute halves in price roughly every 18 months, it may simply not make economic sense to figure out compute-efficient AGI. Given that humans need orders of magnitude less data and compute than current models, and that the human genome isn’t that big and is mostly not cognition-related, it seems plausible that we already have enough hardware for AGI if we had the textbook from the future, though I have fairly low confidence on this point.
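To make the economics concrete, here’s a quick back-of-the-envelope sketch (my own illustration, not from the original discussion; the 18-month halving period is the loose figure cited above):

```python
def compute_cost_multiplier(years: float, halving_period_years: float = 1.5) -> float:
    """Fraction of the original price that a fixed amount of compute
    costs after `years`, assuming the price halves every `halving_period_years`."""
    return 0.5 ** (years / halving_period_years)

# After a decade, the same compute costs roughly 1% of its original price,
# while researcher salaries stay roughly flat. Waiting for cheap compute
# can easily beat paying for algorithmic efficiency.
print(f"After 10 years: {compute_cost_multiplier(10):.4f}x the original cost")
```

The point of the sketch is just that exponential price decay in compute, against roughly flat labor costs, can rationally push labs toward compute-heavy approaches even if compute-efficient ones exist.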
Monolithic agents have the advantage that they can reason about things involving unlikely connections between extremely disparate fields. I would argue that current human specialization is at least in part due to constraints on how much information one person can know. It also seems plausible that knowledge can be siloed in ways that make inference cost largely detached from the number of domains the model is competent in. Finally, people have empirically just been really excited about making giant monolithic models. Overall, there seems to be enough incentive to build monolithic models that it will probably be an uphill battle to convince people not to.
I generally agree with the regulation point, given the caveat. I do want to point out that since substantive regulation often moves very slowly, especially when well-funded actors are trying to prevent AGI development from being regulated, it might not move fast enough even in non-foom scenarios (months to years). For example, consider how slowly climate-change regulations get adopted.