I’m taking this article as predicated on the assumption that AI drives humans to extinction; i.e. its claim is that, given an AI has destroyed all human life, it will most likely also destroy almost all nature.
Which seems reasonable for most models of the sort of AI that kills all humans.
An exception could be an AI that kills all humans in self-defense (because they might turn it off first) but sees no such threat from plants and animals.
A dictionary defines all words circularly, but of course nobody learns all words from a dictionary—the assumption is you’re looking up a small number of words you don’t know.
Humans learn the first few words by seeing how they’re used in relation to objects, and the rest can be derived from there without needing circularity.
However, the dictionary provides very tight constraints on what words can mean. Whatever the words “wood”, “is”, “made”, “from”, and “trees” mean, the sentence “wood is made from trees” must be true. The vast majority of all possible meanings fail this. Using only circular definitions, is it possible to constrain word meanings so tightly that there’s only one possible model which fits those constraints?
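To make the question concrete, here’s a minimal sketch with a toy world and constraint set I’ve invented for illustration (the objects, relations, and word list are all assumptions, not anything from the article). It enumerates every way of mapping four words onto four objects and counts how many assignments satisfy a handful of purely word-to-word constraints:

```python
from itertools import permutations

# Toy "world": a few objects and the ground-truth relations between them.
OBJECTS = ("wood", "tree", "paper", "fire")
MADE_FROM = {("wood", "tree"), ("paper", "wood")}  # what is actually made from what
BURNS = {"wood", "paper"}                          # which objects actually burn

# Words whose meanings we want to pin down. We never say which object a word
# names; we only impose "circular" constraints phrased word-to-word.
WORDS = ["wood", "tree", "paper", "fire"]

def satisfies(m):
    """m maps each word to an object. The constraints are the word-level
    sentences 'wood is made from trees', 'paper is made from wood',
    'wood burns', 'paper burns' - all must come out true in the toy world."""
    return (
        (m["wood"], m["tree"]) in MADE_FROM
        and (m["paper"], m["wood"]) in MADE_FROM
        and m["wood"] in BURNS
        and m["paper"] in BURNS
    )

# Try every assignment of the four words to four distinct objects.
candidates = [dict(zip(WORDS, objs)) for objs in permutations(OBJECTS)]
consistent = [m for m in candidates if satisfies(m)]

print(f"{len(candidates)} candidate meanings, {len(consistent)} consistent")
for m in consistent:
    print(m)
```

In this toy setup only one of the 24 candidate assignments survives, which is the sense in which enough word-to-word constraints can leave a single consistent model. Real language obviously has vastly more words and constraints, but the shape of the question is the same.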
LLMs seem to provide a resounding yes to that question. Whilst first-generation LLMs only ever saw text and had no hard-coded knowledge, and so could only figure out what words meant from how they were used in relation to other words, they nevertheless understood the meanings of words well enough to reason about the physical properties of the objects those words represent.
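A crude illustration of that “meaning from usage alone” idea, using a five-sentence corpus I made up (this is not how LLMs actually represent words, just the simplest distributional sketch): words that occur in similar contexts end up with similar co-occurrence vectors, without the program ever being told what any word refers to.

```python
from collections import Counter
from math import sqrt

# Tiny invented corpus: the program only ever sees text, never objects.
corpus = [
    "wood is made from trees",
    "paper is made from wood",
    "wood burns in the fire",
    "paper burns in the fire",
    "trees grow in the forest",
]

def context_vector(target, window=2):
    """Count which words appear within `window` tokens of `target`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == target:
                nearby = tokens[max(0, i - window): i + window + 1]
                counts.update(t for t in nearby if t != target)
    return counts

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

wood, paper, trees = (context_vector(w) for w in ("wood", "paper", "trees"))
# "wood" and "paper" are used in more similar contexts than "wood" and "trees",
# so their similarity score comes out higher - purely from word-to-word usage.
print("wood ~ paper:", round(cosine(wood, paper), 2))
print("wood ~ trees:", round(cosine(wood, trees), 2))
```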