In this context, an intelligent agent is, to me, one that can understand ordinary language and act accordingly, e.g. provide a truthful answer when a question is posed.
Humans regularly fail at such tasks, but I suspect you would still consider humans generally intelligent.
In any case, it seems very plausible that whatever decision procedure is behind more general forms of inference, it will very likely fall to the inexorable march of progress we’ve seen thus far.
If it does, the effectiveness of our compute could increase enormously almost overnight: you are basically arguing that our current compute is hobbled by an effectively "weak" associative architecture, which implies that a far more powerful architecture is potentially only one trick away.
The real possibility that we are only one trick away from a potentially terrifying AGI should worry you more.
You keep distinguishing "intelligence" from "heuristics", but to my knowledge no one has demonstrated that human intelligence is not itself some set of heuristics. Heuristics are, after all, exactly what you'd expect evolution to produce.
So your argument then reduces to a god-of-the-gaps argument: we keep discovering heuristics for abilities we previously ascribed to intelligence, and the set of capabilities left to "real intelligence" keeps shrinking. Will we eventually be left with the null set, and conclude that humans are not intelligent either? What's your actual criterion for intelligence that would prevent this outcome?