I’ve asked the mods to enable inline reacts for comments on this post.
If the feature is enabled (now enabled!), you can react to each of the takes individually by highlighting them in the list below.
Computational complexity theory does not say anything practical about the bounds on AI (or human) capabilities.
From here on, capabilities research won’t fizzle out (no more AI winters).
Scaling laws and their implications, e.g. Chinchilla, are facts about particular architectures and training algorithms (see the scaling-law note after this list).
“Human-level” intelligence is actually a pretty wide spectrum.
Goal-direction and abstract reasoning (at ordinary human levels) are very useful for next-token prediction.
LLMs are evidence that abstract reasoning ability emerges as a side effect of solving any sufficiently hard and general problem with enough effort.
The difficulty of coming up with the algorithms needed to learn and do abstract reasoning is upper-bounded by evolution, which already found them.
The amount of compute required to run and develop such algorithms can be approximated by comparing current AI systems to components of the brain (see the back-of-envelope sketch after this list).
One of the purposes of studying some problems in decision theory, embedded agency, and agent foundations is to be able to recognize, and avoid, situations in which the first AGI systems solve those problems themselves.
Even at human level, 99% honesty for AI isn’t good enough.
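To spell out the arithmetic behind that last take: treating each statement as independently honest with probability 0.99 (independence is an assumption for illustration), over $n$ statements,

$$P(\text{at least one dishonest statement}) = 1 - 0.99^{\,n} \approx 63\% \text{ for } n = 100, \quad \approx 99.996\% \text{ for } n = 1000.$$

A system you interact with thousands of times is, at that rate, nearly guaranteed to lie to you at some point; correlated failures aren't obviously better than independent ones.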
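On the scaling-laws take: the kind of fact in question is the Chinchilla parametric loss fit (Hoffmann et al., 2022), which for a model with $N$ parameters trained on $D$ tokens is roughly

$$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad E \approx 1.69,\ A \approx 406,\ B \approx 411,\ \alpha \approx 0.34,\ \beta \approx 0.28.$$

Both the functional form and the fitted constants come from one transformer family, tokenizer, and data mix; nothing guarantees they carry over to different architectures or training algorithms.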
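And on the brain-comparison take, a back-of-envelope version of that comparison, using commonly cited order-of-magnitude figures (the resulting range lines up with Carlsmith's brain-computation estimates; the specific numbers are illustrative assumptions, not claims):

$$\text{brain compute} \sim N_{\text{syn}} \cdot r \cdot c \approx (10^{14}\text{–}10^{15}) \cdot (0.1\text{–}2\ \text{Hz}) \cdot (1\text{–}100\ \text{FLOP}) \approx 10^{13}\text{–}10^{17}\ \text{FLOP/s},$$

where $N_{\text{syn}}$ is the synapse count, $r$ an average firing rate, and $c$ the FLOP cost assigned to one synaptic event. Comparing that range against the FLOP/s of current systems that match parts of human performance is the kind of approximation the take refers to.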