Model is here.
Background: I was thinking about the scaling-first picture and the bitter lesson, and how one might interpret it in two different ways:
One is that deep learning is necessary and sufficient for intelligence: there’s no such thing as thinking, no cleverer way to approximate Bayesian inference, no abduction, etc.
The other is that deep learning is sufficient for radical capabilities, even superhuman intelligence, but doesn’t exclude there being even smarter ways of performing cognition.
We have a lot of evidence about the second one, but less about the first one. Evidence for the first one takes the form of “smart humans tried for 75 years, spending ??? person-years on AI research”, so I decided to use Squiggle to estimate the amount of AI research that has happened so far.
Result: 380k to 6.3M person-years, mean 1.5M.
Technique: Hand-written Squiggle code (I didn’t use AI for this one).
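To make the shape of the estimate concrete, here is a minimal Squiggle sketch of this kind of model: assume a starting headcount, grow it exponentially, and sum person-years over the period. Every number below (the 1950 start, the headcount range, the growth rate) is an illustrative assumption, not a value from the submitted model.

```squiggle
// Illustrative sketch only; all parameters are assumed placeholders,
// not the submitted model's values.
startYear = 1950
endYear = 2024
initialResearchers = 10 to 300 // assumed AI researchers active around 1950 (90% CI)
growthRate = 0.04 to 0.12      // assumed annual growth rate of the field
researchersIn(t) = initialResearchers * exp(growthRate * (t - startYear))
years = List.upTo(startYear, endYear)
// Sum researcher-years across the whole period.
totalPersonYears = List.reduce(years, 0, {|acc, t| acc + researchersIn(t)})
totalPersonYears
```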
I don’t know whether this will count as a separate submission (I prefer to treat these two models as one submission), but I took one more pass at improving the model.
New Model is here.
Background is the same as above.
Result: ~150k to 5.4M person-years, mean 1.7M.
Technique: I pasted the original model into Claude Sonnet and asked it to suggest improvements. I then gave the original model and some hand-written suggested improvements to Squiggle AI (instructing it to add different growth modes for the AI winters and to lower the variance of the number of AI researchers in the early years and close to the present).
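As a rough illustration of those two changes, here is a hedged Squiggle sketch: the growth rate switches to a separate regime during the AI winters, and a noise term shrinks toward both endpoints so researcher counts are less uncertain in the early years and near the present. All dates and parameters are assumptions for illustration, not the actual model’s values.

```squiggle
// Illustrative sketch only: winter/boom growth regimes plus uncertainty
// that is smaller in the early years and near the present.
// Every parameter is an assumed placeholder, not the submitted model's.
startYear = 1950
endYear = 2024
base = 10 to 300            // assumed researchers around 1950 (90% CI)
boomGrowth = 0.06 to 0.14   // assumed growth rate outside the winters
winterGrowth = -0.05 to 0.02 // assumed growth rate during the winters
isWinter(t) = (t >= 1974 && t <= 1980) || (t >= 1987 && t <= 1993)
growthIn(t) = if isWinter(t) then winterGrowth else boomGrowth

// Compound the regime-dependent yearly growth rates up to year t.
trendIn(t) = base * List.reduce(
  List.upTo(startYear, t),
  1,
  {|acc, y| acc * (1 + growthIn(y))}
)

// Noise that peaks mid-period and shrinks toward both endpoints, so
// estimates are tighter in the early years and close to the present.
halfSpan = (endYear - startYear) / 2
sigmaIn(t) = 0.3 * (t - startYear) * (endYear - t) / halfSpan ^ 2
researchersIn(t) = trendIn(t) * lognormal(0, sigmaIn(t) + 0.01)

years = List.upTo(startYear, endYear)
totalPersonYears = List.reduce(years, 0, {|acc, t| acc + researchersIn(t)})
totalPersonYears
```

The endpoint-pinned sigma is just one way to encode “lower variance early and late”; the actual model may implement this differently.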
That’s fine, we’ll just review this updated model then.
We’ll only start evaluating models after the cut-off date, so feel free to make edits/updates before then. In general, we’ll only use the most recent version of each submitted model.