One more thought to add here about AI minds:
LLMs are interesting in that they are perhaps unique among minds in being all fiction. That is, I’m not sure LLMs can actually tell fiction and reality apart (in the way we can), because I’m not sure they’ve solved the symbol grounding problem. Instead, they’re so good at manipulating symbols, and have such a rich set of training data, that not having solved symbol grounding doesn’t really matter: they end up lining up with reality anyway.
To the extent they can tell fiction apart from non-fiction, it’s because we humans make that distinction. But I wouldn’t expect non-fiction to feel different from fiction to an LLM: it’s just another frame for deciding which tokens to generate. Internally, non-fiction probably doesn’t look much different from fiction to them, except that fictional worlds, for whatever reason, have much less training data behind them and overlap heavily with the non-fictional one.
But maybe this isn’t that interesting: if everything is fiction to a mind (or everything is non-fiction), there’s no need to tell the two apart.
(None of this is to say that LLMs can’t identify what’s fiction when asked. I’m making a claim about whether an LLM might experience fiction as different from non-fiction the way humans do, and suggesting that it probably doesn’t.)