Back in the GOFAI days, when AI meant A* search, I remember thinking:
Computers are wildly superhuman at explicit reasoning (System 2), like doing arithmetic or searching through chess moves (a toy search sketch follows below)
Computers are garbage at implicit reasoning (System 1), like recognizing a picture of a cat
When computers get good at System 1, they will be wildly superhuman at everything
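To make the System 2 point concrete, here's a minimal sketch of the kind of explicit search GOFAI was built on: A* on a toy grid. The grid, walls, and heuristic are my own illustrative choices, not anything from the original argument.

```python
import heapq

def a_star(start, goal, walls, size):
    """Minimal A* on a square grid: explicit, exhaustive System 2 search."""
    def h(p):  # Manhattan-distance heuristic, admissible on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]  # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g  # length of a shortest path
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in walls:
                continue
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None  # goal unreachable

print(a_star((0, 0), (4, 4), walls={(1, 1), (2, 2), (3, 3)}, size=5))  # -> 8
```

The point is the shape of the computation: exhaustive, legible, and trivially superhuman at scale, with no learned intuition anywhere.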
Now transformers appear to be good at System 1 reasoning, but computers aren’t better than humans at everything. Why? I think it comes down to:
Computers’ System 1 is still wildly subhuman at sample efficiency; they compensate by being billions of times faster than humans
LLMs work because they can train on an inhuman amount of reading material. When trained on only human amounts of material, they suck (back-of-envelope numbers at the end of this section).
LLM agents aren’t very good because they can’t learn on the job. Even dumb humans develop better instincts after a little on-the-job practice. We can just barely improve an LLM’s System 1 from its System 2, but only by brute-forcing an inhuman number of roll-outs (sketched just below).
Robots suck because the real world is slow and we don’t have good tricks to train their System 1 by brute force.
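Here's a hedged sketch of what "brute-forcing roll-outs" means: sample many attempts from a stochastic policy, score them with a verifier, and reinforce the winners (plain REINFORCE with a mean-reward baseline). The toy task, policy, and hyperparameters are all illustrative assumptions, not any lab's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "task": pick the correct answer out of 10 options. The policy's
# System 1 is a softmax over logits; its only signal is rollout reward.
NUM_ANSWERS, CORRECT = 10, 7
logits = np.zeros(NUM_ANSWERS)

def rollout(n):
    """Sample n answers from the current policy and score them 0/1."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    answers = rng.choice(NUM_ANSWERS, size=n, p=probs)
    rewards = (answers == CORRECT).astype(float)
    return answers, rewards, probs

for step in range(300):
    answers, rewards, probs = rollout(n=64)   # 64 roll-outs per update
    advantage = rewards - rewards.mean()      # simple baseline
    one_hot = np.eye(NUM_ANSWERS)[answers]    # (64, 10)
    # d log pi(a) / d logits = one_hot(a) - probs, for a softmax policy
    grad = (advantage[:, None] * (one_hot - probs)).mean(axis=0)
    logits += 1.0 * grad                      # plain REINFORCE step

_, rewards, _ = rollout(n=1000)
print(f"success rate after 300 x 64 roll-outs: {rewards.mean():.2f}")
```

Note the budget: roughly 19,200 roll-outs to learn a single 10-way discrimination that a human would get from one correction.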
We’re in a weird paradigm where computers are billions of times faster than humans, but thousands of times worse at learning from a datum.
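To put rough numbers on that ratio (all figures loudly assumed; training-corpus sizes aren't public, but the conclusion survives an order of magnitude of error either way):

```python
# Back-of-envelope arithmetic with assumed numbers.
llm_tokens = 15e12             # assumed: ~15T-token pretraining corpus
words_per_minute = 250         # typical adult reading speed
reading_minutes_per_day = 120  # a dedicated reader
reading_years = 50

human_tokens = (words_per_minute * reading_minutes_per_day
                * 365 * reading_years * 1.3)  # ~1.3 tokens per word
print(f"human lifetime reading: ~{human_tokens:.1e} tokens")  # ~7e8
print(f"LLM pretraining corpus: ~{llm_tokens:.1e} tokens")
print(f"ratio: ~{llm_tokens / human_tokens:,.0f}x")
```

Under these assumptions the corpus comes out to roughly 20,000x a lifetime of dedicated reading, which is the sense in which the speed advantage papers over the sample-efficiency gap.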