I have enough experience with text-based RP, and have interacted with Character.AI enough, to confidently assert that LLMs are not categorically different in their output from a poor-memory RPer, despite sometimes clearly different underlying patterns.
“Potentially get as good as humans” — in general, of course I think so, since I expect that by default we’re all dead within 100 years to AGI. If you mean actual current LLMs, I’m pretty sure they cannot. See https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce and https://www.lesswrong.com/posts/5tqFT3bcTekvico4d/do-confident-short-timelines-make-sense
As an example, I would point to the low-sample-complexity learning that humans sometimes do, and claim that LLMs don’t do this in the relevant sense, and that it is necessary for getting that good. See also this thread: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce?commentId=dqbLkADbJQJi6bFtN