We should have been trying hard to retrospectively construct new explanations that would have predicted the observations. Instead, we went with the best PREEXISTING explanation we had.
Yeah, that seems to be a real problem. E. g., it was previously fairly reasonable to believe that “knowing what to do” and “knowing how to do it” were on a spectrum, with any given “what to do” being a “how to do” at big-enough scale; or that there’s no fundamental difference between a competent remixing of existing ideas and genuine innovation. I believed that.
LLMs seem to be significant evidence that those models are flawed. There apparently is some qualitative difference between autonomy and instruction-following; between innovation and remixing. But tons of people still seem to think that “LLMs are stochastic parrots” and “LLMs’ ability to do any kind of reasoning means they are on a continuous path to human-level AGI” are the only two positions, based on their pre-2022 models of agency.
This seems like an overly good-faith / mistake-theoretic explanation of the false dichotomy (which is not to say it's never applicable). This is a dialectical social dynamic: each side gains credit with its supporters by using the other side's bad arguments as a foil, conspicuously ignoring the possibility of positions outside the binary.