It feels to me like Sutton is too deep inside the experiential-learning theory. When he says there is no evidence for imitation, this only makes sense if you interpret the claim strictly within the RL framework he has in mind. He isn’t applying the theory to anything; he is inside the theory, interpreting everything according to his understanding of it.
It did feel like there was a lot of talking past one another: Dwarkesh was clearly taking the superintelligent behaviors everyone is interested in (doing science, math, and engineering) as his model for intelligence, while Sutton brushed all of that off, only articulating quite late in the game that human infants are his model for intelligence. If that had been cleared up early on, the conversation would probably have been more productive.
I have always found the concept of a p-zombie kind of silly, but now I feel like we might really have to investigate the question of an approximate i-zombie: if we have a computer that can output anything an intelligent human can, but we stipulate that the computer is not intelligent… and so on and so forth.
On the flip side, it feels kind of like a waste of time. Who would be persuaded by such a thing?