GPT-4 is bad at strategic thinking

GPT-4 is known to be pretty good at chess (see I played chess against ChatGPT-4 and lost! for one example). However, GPT-4 does not seem to be very good at strategic reasoning in general; it only really manages when a greedy search is enough, i.e. when always picking the move that looks best right now, with no lookahead, is good enough to win.
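
To be concrete about the distinction I have in mind, here is a minimal Python sketch (not a claim about GPT-4's internals) contrasting greedy move selection with actual lookahead. The `legal_moves`, `apply_move`, and `evaluate` helpers are hypothetical placeholders for whatever game is being played.

```python
# Illustrative sketch only: greedy one-ply choice vs. minimax lookahead.
# `legal_moves`, `apply_move`, and `evaluate` are hypothetical stand-ins.

def greedy_move(state, legal_moves, apply_move, evaluate):
    """Pick the move with the best immediate evaluation (one ply deep)."""
    return max(legal_moves(state),
               key=lambda m: evaluate(apply_move(state, m)))

def minimax_move(state, legal_moves, apply_move, evaluate, depth=3):
    """Pick the move that is best assuming the opponent replies optimally."""
    def value(s, d, maximizing):
        moves = legal_moves(s)
        if d == 0 or not moves:
            return evaluate(s)
        vals = (value(apply_move(s, m), d - 1, not maximizing) for m in moves)
        return max(vals) if maximizing else min(vals)

    return max(legal_moves(state),
               key=lambda m: value(apply_move(state, m), depth - 1, False))
```

The games below all seem to need something like the second function; GPT-4 behaves more like the first.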

I tried Hex and Connect4, and it failed at both despite being able to explain the rules and even display the board in ASCII art. Wondering if it just has bad spatial reasoning, I also tried natural-language puzzles based on logical constraints. It failed those as well unless they were quite simple.

I even made up a chess variant on the spot where the goal is to get any piece to the back rank instead of capturing the King. It still didn't stop me from "sacking" my queen by moving it to the back rank as soon as there was a gap. So if it has an internal model of chess, it didn't figure out how to apply it to new objectives.
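
For concreteness, the rule change is tiny: everything about chess stays the same except the win check. A rough Python sketch (with a hypothetical `board` dict mapping squares like "e8" to (colour, piece) pairs; the standard-chess check is simplified) shows that a model which genuinely understood chess strategy would only need to swap out one function.

```python
# Sketch of the variant, assuming a hypothetical `board` dict of
# square -> (colour, piece). Only the winning condition changes.

BACK_RANK = {"white": "8", "black": "1"}  # the far rank each side races toward

def standard_win(board, colour):
    """Normal chess, roughly: you win when the opposing king is gone."""
    opponent = "black" if colour == "white" else "white"
    return not any(piece == "king" and owner == opponent
                   for owner, piece in board.values())

def variant_win(board, colour):
    """My variant: you win when any of your pieces reaches the far back rank."""
    return any(owner == colour and square[1] == BACK_RANK[colour]
               for square, (owner, _piece) in board.items())
```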

So I think GPT-4 must've learned something like a rudimentary chess-specific engine; it is not applying general strategic reasoning to chess.

This doesn't necessarily mean GPT-4 can't be agentic, but it does suggest that any agent it implements is either a narrow one or a dumb one (or it's hiding its abilities).