Excellent post, thank you for taking the time to articulate your ideas in such a detailed, high-quality way. I think this is a fantastic addition to LessWrong and the Alignment Forum. It offers a novel perspective on AI risk and does so in a curious, truth-seeking manner that’s aimed at genuinely understanding different viewpoints.
Here are a few thoughts on the content of the first post:
I like how it offers a radical perspective on AGI in terms of human intelligence and describes the definition in an intuitive way. This is necessary as AGI is increasingly being redefined as something like “whatever LLM comes out next year”. I definitely found the post illuminating, and it resulted in a perspective shift because it described an important but neglected vision of how AGI might develop. It feels like the discourse around LLMs is sucking the oxygen out of the room, making it difficult to seriously consider alternative scenarios.
I think the basic idea in the post is that LLMs are built by applying an increasing amount of compute to transformers trained via self-supervised or imitation learning, but that they will be replaced by a future brain-like paradigm that needs much less compute while being much more effective.
This is a surprising prediction because it seems to run counter to Rich Sutton’s bitter lesson, which observes that, historically, general methods that leverage computation (like search and learning) have ultimately proven more effective than those that rely on human-designed cleverness or domain knowledge. The post seems to predict a reversal of this long-standing trend (or I’m just misunderstanding the lesson), where a more complex, insight-driven architecture will win out over simply scaling up the current, simpler ones.
On the other hand, there is an ongoing trend of algorithmic progress and increasing computational efficiency, which could smoothly lead to the future described in this post (though the post seems to describe a more discontinuous break between current and future AI paradigms).
If the post’s prediction comes true, then I think we might see a new “biological lesson”: brain-like algorithms will replace deep learning, just as deep learning replaced GOFAI.
Thanks!

> This is a surprising prediction because it seems to run counter to Rich Sutton’s bitter lesson, which observes that, historically, general methods that leverage computation (like search and learning) have ultimately proven more effective than those that rely on human-designed cleverness or domain knowledge. The post seems to predict a reversal of this long-standing trend (or I’m just misunderstanding the lesson), where a more complex, insight-driven architecture will win out over simply scaling up the current, simpler ones.
No, I’m also talking about “general methods that leverage computation (like search and learning)”. Brain-like AGI would also be an ML algorithm. There’s more than one ML algorithm. The Bitter Lesson doesn’t say that all ML algorithms are equally effective at all tasks, nor that there are no more ML algorithms left to discover, right? If I’m not mistaken, Rich Sutton himself is hard at work trying to develop new, more effective ML algorithms as we speak. (alas)