I imagine you could reliably complete a PhD in many fields with a week-long time horizon, as long as you get good enough weekly feedback from a competent advisor:

1. Talk to your advisor about what it takes to get a PhD.
2. Divide that into a list of tasks, each less than a week long.
3. Complete task 1, get feedback, revise the list.
4. Either repeat the current task or move on to the next one, depending on feedback.
5. Loop until complete. Every ten or so loops, check overall progress to date against the original requirements and evaluate whether the overall pace of progress is acceptable. If not, come up with possible new plans and get advisor feedback.
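The steps above can be sketched as a simple control loop. This is a toy illustration only; the callables passed in (`do_task`, `feedback_ok`) are hypothetical stand-ins for real work and real advisor feedback.

```python
# Toy sketch of the weekly-feedback loop described above.
# do_task and feedback_ok are hypothetical stand-ins, not a real API.

def weekly_loop(tasks, do_task, feedback_ok, checkpoint_every=10):
    """Work through week-long tasks, repeating each until feedback approves it."""
    completed = []
    loops = 0
    while tasks:
        result = do_task(tasks[0])       # step 3: complete the current task
        if feedback_ok(result):          # step 4: advisor says move on...
            completed.append(tasks.pop(0))
        loops += 1                       # ...otherwise repeat it next loop
        if loops % checkpoint_every == 0:
            pass  # step 5: compare overall pace against original requirements
    return completed

# Usage: with an advisor who always approves, tasks complete in order.
print(weekly_loop(["lit review", "experiment 1"], str.upper, lambda r: True))
```

The point is the control flow: progress requires only local, week-scale judgment plus periodic global checkpoints, not long-horizon planning by the worker.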
I think it’s nearly impossible to create unexpected new knowledge this way.
As for not believing the current paradigm could reach AGI, which paradigm do you mean? I don’t think “random variation and rapid iteration” is a fair assessment of the current research process. But even if it were, what should I do with that information? Luckily, we have a convenient example of what it takes for blind mutations with selection pressure to raise intelligence to human levels: us! I am pretty confident saying that current LLMs would outperform, say, Australopithecus on any intellectual ability, but not Homo sapiens. That transition happened over a few million years, let’s say 200k generations of 10-100k individuals each, during which intelligence was one of many, many factors weakly driving selection pressure, with at most a small number of variations per generation. I can’t really quantify how much human intelligence and directed effort speed up progress compared to blind chance, but consider that 1) a current biology grad student can do things with genetics in an afternoon that evolution needs thousands of generations and millions of individuals or more to do, and 2) the modern economic growth rate, essentially the summed impact of human insight on human activity, is around 15,000x faster than it was in the paleolithic. Naively extrapolated, this outside view would tell me that science and engineering can take us from Australopithecus-level to human-level in about 13 generations (unclear which generation we’re on now). The number of individuals needed per generation depends on how much we vary each individual, but is plausibly in the single or double digits.
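The back-of-the-envelope arithmetic behind that ~13-generation figure, made explicit (both inputs are the rough estimates from the paragraph above, not established values):

```python
# Naive extrapolation using the rough figures from the text.
generations_evolution = 200_000  # Australopithecus -> Homo sapiens, rough estimate
insight_speedup = 15_000         # modern growth rate vs. paleolithic, rough estimate

generations_needed = generations_evolution / insight_speedup
print(round(generations_needed, 1))  # ~13.3 generations
```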
I can’t parse this.
My disagreement with your conclusion from your third objection is that scaling inference-time compute increases performance within a model generation, but that’s not how iteration works between generations. We use reasoning models with more inference-time compute to generate better data, to train better base models, to reproduce similar capability levels with less compute, to build better reasoning models. So if you build the first superhuman coder and find it’s expensive to run, what’s the most obvious next step in the chain? Follow the same process we’ve been following for reasoning models, and if the straight lines on graphs hold, then six months later we’ll plausibly have one that’s a tenth the cost to run. Repeat again for the next six months after that.
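If each six-month cycle really does cut running cost to a tenth (the straight-lines-on-graphs assumption above, not a certainty), the compounding is simple:

```python
# Relative running cost after n six-month cycles, assuming a 10x drop per cycle
# (the hedged extrapolation from the text, not an established trend).
def relative_cost(six_month_cycles, drop_per_cycle=10):
    return drop_per_cycle ** -six_month_cycles

print(relative_cost(1))  # a tenth the cost after six months
print(relative_cost(2))  # a hundredth after a year
```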
You’re right, but creating unexpected new knowledge is not a PhD requirement. I expect it’s pretty rare that a PhD student achieves that level of research.
It wasn’t a great explanation, sorry, and there are definitely some leaps, digressions, and hand-wavy bits. But basically: even if current AI research were all blind mutation and selection, we already know that that process can yield general intelligence from animal-level intelligence, because evolution did it. And we already have various examples of how human research can apply much greater random and non-random mutation, larger individual changes, higher selection pressure in a preferred direction, and more horizontal transfer of traits than evolution can, enabling (very roughly estimated) ~3-5 OOMs greater progress per generation with fewer individuals and shorter generation times.
I do weakly expect it to be necessary to reach AGI though. Also, I personally wouldn’t want to do a PhD that didn’t achieve this!
Okay, then I understand the intuition but I think it needs a more rigorous analysis to even make an educated guess either way.
You’re probably right about distilling CoT.
Saw your edit above, thanks.
No, thank you!
Agreed. It was somewhere around reason #4 that I quit my PhD program as soon as I qualified for a master’s in passing.