Claude doesn’t get it
In all my interactions with AI, one problem keeps recurring and doesn't seem to go away: the models don't get it. I don't know how to explain this in any better language than that, and I don't know how to build a "Get It" benchmark. But whenever I talk to Claude, ChatGPT, Gemini, or any other model about a concept, the longer the interaction lasts, the stronger my sense that it doesn't really "get it".

In this way, I think AI skeptics are pointing to something real when they say these systems aren't "real intelligence". A lot of their arguments are poor, but something truly is missing, and it doesn't seem to change with improvements in other capabilities: GPT 5.3 seems just as bad at "getting it" as GPT 3.5. It's not that Claude "gets" simpler concepts but struggles with more complex ones; it doesn't seem to "get" any concepts at all, simple or otherwise.

I don't know what this means going forward, or whether "getting it" is even needed to start an intelligence explosion. I can imagine cases where an AI could be a capable AI researcher without ever really "getting it", just as models can be capable coders. But it could be another obstacle to alignment: an AI creating a smarter AI might fail to align it, not because it is misaligned, but because it is too stupid to really grasp its own values.