“It” here refers to progress from human ingenuity, so I’m hesitant to put any limits whatsoever on what it will produce and how fast.
There’s a contingent fact which is how many people are doing how much great original natural philosophy about intelligence and machine learning. If I thought the influx of people were directed at that, rather than at other stuff, I’d think AGI was coming sooner.
Humans are likely to accomplish such a feat in decades or centuries at the most,
As I said in the post, I agree with this, but I think it requires a bunch of work that hasn’t been done yet, some of it difficult or requiring insights.
I actually think another lesson from both evolution and LLMs is that it might not require much or any novel philosophy or insight to create useful cognitive systems, including AGI. I expect high-quality explicit philosophy to be one way of making progress, but not the only one.
Evolution itself did not do any philosophy in the course of creating general intelligence, and humans themselves often manage to grow intellectually and get smarter without doing natural philosophy, explicit metacognition, or deep introspection.
So even if LLMs and other current DL paradigm methods plateau, I think it’s plausible, even likely, that capabilities research like Voyager will continue making progress for a lot longer. Maybe Voyager-like approaches will scale all the way to AGI, but even if they also plateau, I expect that there are ways of getting unblocked other than doing explicit philosophy of intelligence research or massive evolutionary simulations.
In terms of responses to arguments in the post: it’s not that there are no blockers, or that there’s just one thing we need, or that big evolutionary simulations will work or be feasible any time soon. It’s just that explicit philosophy isn’t the only way of filling in the missing pieces, however large and many they may be.
Don’t you think that once scaling hits the wall (assuming it does), the influx of people will be redirected toward natural philosophy of intelligence and ML?
Yep! To some extent. That’s what I meant, above, by “It also seems like people are distracted now.” I have a denser probability on AGI in 2037 than on AGI in 2027, for that reason.
Natural philosophy is hard, has somewhat serial dependencies, and IMO it’s unclear how close we are. (That uncertainty includes “plausibly we’re very, very close, and just one more insight about how to tie things together will open the floodgates”.) There’s also other stuff for people to do: they can quiesce into bullshit jobs; they can work on harvesting stuff; they can leave the field; they can work on incremental progress.
Related: “There are always many ways through the garden of forking paths, and something needs only one path to happen.”