Another way to get at the same point, I think, is a question from a Quora post: are there things that we (contemporary humans) will never understand?
I think we can get some plausible insight on this by comparing an average person to the most brilliant minds today, or by comparing the earliest recorded examples of reasoning in history to those of modernity. My intuition is that there are many concepts (quantum physics is a popular example, though I’m not sure it’s a good one) that most people today, and certainly most people in the past, will never comprehend, at least not without massive effort, and possibly not even then. They simply require too much raw cognitive capacity to appreciate. This is at least implicit in the Singularity hypothesis.
As to the energy issue, I don’t see any reason to think that such super-human cognition systems necessarily require more energy, though they may at first.
It depends on the criteria we place on “understanding.” Certainly an AI may act in a way that invites us to attribute “common sense” to it in some situations, without solving the “whole problem.” Watson would seem to be a case in point: it apparently demonstrates true language understanding within a broad, but still strongly circumscribed, domain.
Even if we take “language understanding” in the strong sense (i.e., native fluency, including the capacity for semantic innovation, irony, and the like), there is still the question of phenomenal experience: does having such an understanding entail the experience of that understanding, that is, self-consciousness? And are we concerned with that?
I think that “true” language understanding is indeed “AI-complete,” but in the rather trivial sense that matching a competent human speaker requires most of the ancillary cognitive capacities of a competent human.