But this isn’t what’s happening, in my opinion. On the contrary: it’s the LLM believers who are sailing against the winds of evidence.
You say that, but… what’s the evidence?
What specific tasks are they failing to generalize on? What’s a prompt they can’t solve?
If a friend is freaking out over a baseline model, how do I help ground them?
What about a smart person claiming they’ve got a series of prompts that produces novel behavior?
What are the tests they can use to prove to themselves that this really is just confirmation bias? And who do they talk to if they really have built something that gets past the basic 101 tests?