John Pressman: So what stands out to me about [Kimi K2] is that it doesn’t do the thing language models normally do where they kind of avoid detail. Like, a human will write about things using specific names and places.
And if you pay close attention to LLM writing they usually avoid this. It’s one of the easiest ways to spot LLM writing. This model emphatically *does not* have this problem. It writes about people and events with the rich detail characteristic of histories and memoirs. Or fictional settings with good worldbuilding.
That comment is spot on. The model is so incredibly specific, as if it wants to cram more facts than words into the text, which makes for writing I find engaging to read.
Unfortunately, that also means it’s extremely happy to confabulate all the time, especially when pushed to the boundary of its knowledge. It just (half-)made up a ton of facts—is that what it’s like to work with o3‽ Never being able to trust the model not to make things up? I might’ve been spoiled by the Claudes, then…