LLMs by default can easily be “nuance-brained”: e.g., if you ask Gemini for criticism of a post, it can easily generate ten plausible-enough reasons why the argument is bad. But recent Claudes seem better at zeroing in on central errors.
Here’s an example of Gemini trying pretty hard and getting close to the error, but not quite noticing it until I hinted at it multiple times.