Yes that’s right, thinking of the prompts themselves.
I agree it’s not very surprising given what we know about neural networks; it’s just a way in which LLMs are very much not generalizing the way a human would.