My thoughts on the “Humans vs. chimps” section (which I found confusing/unconvincing):
Chimpanzees have brains only ~3x smaller than humans, but are much worse at making technology (or doing science, or accumulating culture…). If evolution were selecting primarily or in large part for technological aptitude, then the difference between chimps and humans would suggest that tripling compute and doing a tiny bit of additional fine-tuning can radically expand power, undermining the continuous change story.
But chimp evolution is not primarily selecting for making and using technology, for doing science, or for facilitating cultural accumulation.
For me, the main takeaway of the human vs. chimp story is information about the structure of mind space, namely that there are discontinuities in terms of real-world consequences.
Evolution changes continuously on the narrow metric it is optimizing, but can change extremely rapidly on other metrics. For human technology, features of the technology that aren’t being optimized change rapidly all the time. When humans build AI, they will be optimizing for usefulness, and so progress in usefulness is much more likely to be linear.
I don’t see how “humans are optimizing AI systems for usefulness” undermines the point about mind space—if there are discontinuities in capabilities / resulting consequences, I don’t see how optimizing for capabilities / consequences makes things any more continuous.
Also, there is a difference between “usefulness” and (say) “capability of causing human extinction”, just as there is a difference between “inclusive genetic fitness” and “intelligence”. Cf. how hard it is to get LLMs to do what you want them to do, and how the difference in publicity* between ChatGPT and other GPT-3 models is more about usability and UI than about the underlying capabilities.
*Publicity is a different thing from usefulness. Lacking a narrower definition of usefulness, I would still argue that to many people ChatGPT is more useful than other GPT models.