Conversely, if gorillas and chimps were capable of learning complex sign language for communication, we’d expect them to have evolved or culturally developed such a language on their own.
We’ve seen an extreme counterexample with octopuses, which can be taught some very impressive skills that they don’t pick up in nature because they aren’t sufficiently social to develop them over multiple generations. I think it’s within reason that gorillas could have the ability to learn more complex language than they use, so long as it isn’t economical for them to spend time teaching their offspring those complexities rather than other skills.
I will say that I’m very skeptical of Koko, though, for other reasons.
This is where I get lost. Isn’t “there will be a model with a 10,000x bigger time-horizon” equivalent to “the singularity will have happened”?
Some people argue that the time horizon won’t keep growing at the same pace and will plateau; others argue that it will keep growing and we’ll get a technological singularity. But if an LLM can do anything that would take a moderately competent human five years, then that does seem like the end of our current mode of civilization.
In other words, I don’t see a set of possible worlds where LLM time horizons grow too long to be marketable to hobbyist engineers, yet that lack of marketability is still a concern.
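For what it’s worth, here’s the back-of-the-envelope arithmetic that makes me read “10,000x” and “five years” as roughly the same claim. This is a minimal sketch: the ~1-hour starting horizon, the 2,000 working hours per year, and the 7-month doubling time are all my own illustrative assumptions, not numbers from this thread.

```python
import math

# Back-of-the-envelope sketch; every number here is an illustrative
# assumption, not a figure from the discussion above.
current_horizon_hours = 1.0     # assumed current model time horizon (~1 hour)
multiplier = 10_000             # the "10,000x bigger time-horizon" in question
working_hours_per_year = 2_000  # ~40 h/week * 50 weeks

future_horizon_hours = current_horizon_hours * multiplier
print(f"{future_horizon_hours:,.0f} hours ~= "
      f"{future_horizon_hours / working_hours_per_year:.1f} working years")
# -> 10,000 hours ~= 5.0 working years, matching the "five years" framing above

# If the horizon kept doubling at some steady rate (7 months is an assumed,
# illustrative value), reaching 10,000x would take log2(10,000) doublings:
doubling_time_months = 7
doublings = math.log2(multiplier)
print(f"{doublings:.1f} doublings ~= "
      f"{doublings * doubling_time_months / 12:.1f} years at that pace")
# -> ~13.3 doublings ~= ~7.8 years
```

So under those assumptions, “a model with a 10,000x bigger time-horizon” is a model that can do five person-years of work, which is why the two framings look equivalent to me.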