Yeah, it might just be a lack of training data for 10-second-or-less interactive instructions.
The thing I really wanted to test with this experiment was actually whether ChatGPT could engage with the real world using me as a guinea pig. The 10-second-or-less thing was just the format I used to try to “get at” the phenomenon of engaging with the real world. I’m interested in improving the format to more cleanly get at the phenomenon.
I do currently have the sense that it’s more than just a lack of training data. I have the sense that ChatGPT has learned much less about how the world really works at a causal level than it appears from much of its dialog. Specifically, I have the sense that it has learned how to satisfy idle human curiosity using language, in a way that largely routes around a model of the real world, and especially routes around a model of the dynamics of the real world. That’s my hypothesis—I don’t think this particular experiment has demonstrated it yet.