If it were just a ham-fisted way to explain to normies that LLMs that do relatively well on a Turing test aren’t humans, then I agree, trivially.
Isn’t the optimistic point that LLMs are similar to humans for the same reasons they are similar to each other, modulo some simple transformations?
And this debate seems factually resolvable by figuring out whether ChatGPT is actually nice.