Most of my view on “deeply alien” is downstream of LLMs being extremely superhuman at literal next-token prediction and generally superhuman at recalling random details of webtext.
Another component corresponds to a general view that LLMs are trained in a very different way from how humans learn. (Though you could in principle get the same cognition from very different learning processes.)
This does correspond to specific falsifiable predictions.
Despite being pretty confident in “deeply alien” in many respects, it isn’t clear to me whether LLMs will in practice have very different relative capability profiles from humans on the larger-scale downstream tasks we actually care about. (Currently, the answer seems to be “mostly no” from my perspective.)
In addition to the above, I’d add some points about how blank-slate theory seems to be wrong as a matter of human psychology. If evidence came out tomorrow that humans are actually blank slates to a much greater extent than I realized, so much so that e.g. the difference between human and dog brains is basically just size and training data, I’d be more optimistic that what’s going on inside LLMs isn’t deeply alien.