I’d like to nitpick the E, A, R example a bit. In particular, a Rosetta Stone R is far from an equivalence between the features of A and B. It’s a tiny subset of features of the world, expressed in both Linear A and in English. The trick is that this tiny hook between the languages can be bootstrapped into an imperfect relationship between the symbology of Linear A and the symbology of English, up to the limits of indeterminacy (a sketch of the mechanism is below). Given a sufficiently large corpus of Linear A, a bootstrapping R isn’t even needed: nearly all of the information needed for translation is contained within E and the fact that E describes a shared world. In some sense, the language is self-grounded, in the same messy, largely self-referential way that any agent’s command of any language is grounded.
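To make the bootstrapping move concrete, here is a minimal sketch on synthetic data (all names and numbers are illustrative, not from the article). It mirrors the Procrustes self-learning idea from the bilingual-lexicon-induction literature: a tiny seed lexicon, playing the role of R, fits an orthogonal map between two embedding spaces, and the matches that map induces then serve as a much larger dictionary for refinement.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, dim = 300, 10

# Toy "shared world" E: both languages embed the same 300 concepts, but
# language B's space is an unknown rotation of language A's, plus noise.
emb_a = rng.normal(size=(n_words, dim))
rotation, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
emb_b = (emb_a + 0.1 * rng.normal(size=(n_words, dim))) @ rotation

# Row-normalize so dot products behave like cosine similarities.
emb_a /= np.linalg.norm(emb_a, axis=1, keepdims=True)
emb_b /= np.linalg.norm(emb_b, axis=1, keepdims=True)

def procrustes(src, tgt):
    """Closed-form best orthogonal map W with src @ W ~ tgt (via SVD)."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

# The "Rosetta Stone" R: a mere 15 known pairs out of 300 words.
pairs = [(i, i) for i in range(15)]

for step in range(3):
    src = emb_a[[i for i, _ in pairs]]
    tgt = emb_b[[j for _, j in pairs]]
    w = procrustes(src, tgt)
    # Induce a full dictionary from nearest neighbours under the map,
    # then re-fit on it: the tiny hook bootstraps the whole lexicon.
    best = ((emb_a @ w) @ emb_b.T).argmax(axis=1)
    accuracy = np.mean(best == np.arange(n_words))
    pairs = list(enumerate(best))
    print(f"step {step}: dictionary size {len(pairs)}, accuracy {accuracy:.0%}")
```

The seed here covers 5% of the lexicon yet recovers essentially all of it, which is the sense in which R is only a hook rather than an equivalence. And consistent with the stronger claim above, the unsupervised-translation literature has shown that with large enough monolingual corpora the seed itself can be replaced by an initialization that merely matches the two distributions, so no R at all.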
With regard to GPT-n, I don’t think the hurdle is groundedness. Given a sufficiently vast corpus of language, GPT-n will achieve a level of groundedness where it understands language at a human level but lacks the ability to make intelligent extrapolations from that understanding (e.g. invent general relativity), which is rather a different problem.
To properly address “GPT-n series of algorithms will not reach super-human levels of capability”, I need to understand more clearly what you mean by “capability”. Capability of what, in particular? Can you describe a test that could verify superhuman capability? Is this different from a Turing test?
With regard to GPT-n, I don’t think the hurdle is groundedness. Given a sufficiently vast corpus of language, GPT-n will achieve a level of groundedness where it understands language at a human level but lacks the ability to make intelligent extrapolations from that understanding (e.g. invent general relativity), which is rather a different problem.
The claim in the article is that grounding is required for extrapolation, so these two problems are not in fact unrelated. Compare, e.g., the case of a student who has memorized a number of crucial calculus formulas by rote but cannot derive those formulas from scratch if asked (and by extension obviously cannot conceive of or prove novel theorems either); this suggests an insufficient understanding of the fundamental mathematical underpinnings of calculus, which (if I understood Stuart’s post correctly) is a form of “ungroundedness”.