I think he just objected to the phrasing. I do think “set up a system where people can be banned by others whom Said does not instruct on who to ban” is a stretch for “Said bans people from DSL.”
I have generally found Said to mean the things he says quite literally and to expect others to do so as well. It’s painful to read a conversation where one person keeps assigning subtext to another who quite clearly never intended to put it there.
I think what is actually happening is “yes, all the benchmarks are inadequate.” In humans, those benchmarks correlate with a particular kind of ability we might call ‘able to navigate society and, in some field, to improve it.’ Top-of-the-line AIs still routinely delete people’s home dirs and cannot run a profitable business even when heavily hand-held. AIs have only really started this year to convincingly contribute to software projects beyond toys. There are still many software projects that could never be created even by a team of AIs all running in pro mode at 100x the cost of living of a human. Benchmarks are fundamentally an attempt to measure a known cognitive manifold by sampling it at points. What we have learnt in these years is that it is possible to build an intelligence whose cognitive manifold is far more fragmented than a human’s.
This is what I think is happening. Humans use maybe a dozen strong generalist strategies with diverse modalities, evaluated slowly and then cached. LLMs use one, backprop on token prediction, which is general enough to generate hundreds of more-or-less-shared subskills. But that means the main mechanism that gives an LLM a skill in the first place goes unevaluated for half its lifetime. As a consequence, LLMs are monkey’s paws: they can become good at any skill that can be measured, and in doing so they demonstrate to you that the skill you actually wanted (the immeasurable one that you hoped the measurable one would be evidence for) did not benefit nearly as much as you hoped.
It’s strange how things worked out. After decades of moving goalposts, we have finally created a general, weakly superhuman intelligence that is specialized toward hitting measurable targets and nothing else.