If people are reading this thread and want to read this argument in more detail: the (excellent) book ‘The Secret of Our Success’ by Joseph Henrich (Slate Star Codex review/summary here: https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/) makes this argument in a very compelling way. There is a lot of support for the idea that the crucial ‘rubicon’ separating chimps from humans is cultural transmission, which enables the gradual evolution of strategies over periods longer than an individual lifetime, rather than any ‘raw’ problem-solving intelligence. In fact, according to Henrich, there are several measures of raw intelligence on which humans are actually worse than chimps: chimps have better working memory and faster reactions on some complex tasks, and they are better than people at finding Nash equilibria that require randomising your strategy. But humans are uniquely able to learn behaviours from demonstration and to form larger groups, which enables the gradual accumulation of ‘cultural technology’; this then opened up a runaway process of cultural-genetic co-evolution (e.g. food processing technology → smaller stomachs and bigger brains → even more culture → bigger brains even more of an advantage, etc.).

It’s hard to appreciate how much this kind of thing helps you think; for instance, most people can learn maths, but few would have invented Arabic numerals by themselves. Similarly, having a large brain by itself is actually not super useful without the cultural superstructure: most people alive today would quickly die if dropped into the ancestral environment without the support of modern culture, unless they could learn from hunter-gatherers (see Henrich for many examples of this happening to European explorers!). For instance, I like to think I’m a pretty smart guy, but I have no idea how to make e.g. bronze or stone tools, and it’s not obvious that my physics degree would help me figure it out! Henrich also makes the case for the importance of this with some slightly chilling examples of cultures that lost the ability to make complex technology (e.g. boats) when they fell below a critical population size and became isolated.
It’s interesting to consider the implications for AI: I’m not very sure about this. On the one hand, LLMs clearly have a superhuman ability to memorise facts, but I’m not sure this means they can learn new tasks or new information particularly easily. On the other hand, it seems likely that LLMs are taking pretty heavy advantage of the ‘culture overhang’ of the internet! I don’t know if it really makes sense to think of their abilities here as strongly superhuman: if you magically had the compute and code to train GPT-n in 1950, it’s not obvious you could have got it to do very much without the internet for it to absorb.
I think there’s some truth to this framing, but I’m not sure that people’s views cluster as neatly as this. In particular, I think there is a ‘how dangerous is existential risk’ axis and a ‘how much should we worry about AI and power’ axis. I think you rightly identify the ‘booster’ cluster (x-risk fake, AI+power nothing to worry about) and the ‘realist’ cluster (x-risk sci-fi, AI+power very concerning), but I think you are missing quite a lot of diversity in people’s positions along other axes, which makes this arguably even more confusing for people. For example, I would characterise Bengio as being fairly concerned about both x-risk and AI+power, whereas Yudkowsky is extremely concerned about x-risk and fairly relaxed about AI+power.
I also think it’s misleading to group even ‘doomers’ into one cluster, because there’s a lot of diversity in the policy asks of people who think x-risk is a real concern, ranging from ‘more research needed’ to ‘shut it all down’. One very important group you are missing is people who are simultaneously quite (publicly) concerned about x-risk, but also quite enthusiastic about pursuing AI development and deployment. This group is important because it includes Sam Altman, Dario Amodei and Demis Hassabis (the leadership of the big AI labs), as well as quite a lot of people who work on developing AI or on AI safety. You might summarise this position as ‘AI is risky, but if we get it right it will save us all’. As they often work at big tech companies, I think these people are mostly fairly unworried or neutral about AI+power. This group is obviously important because they work directly on the technology, but also because this gives them a loud voice in policy and the public sphere. You might think of this as a ‘how hard is mitigating x-risk’ axis, and it is another key source of disagreement: going from public statements alone, I think (say) Sam Altman and Eliezer Yudkowsky agree on the ‘how dangerous’ axis and are both fairly relaxed Silicon Valley libertarians on the ‘AI+power’ axis, and mainly disagree on how difficult it is to solve x-risk. Obviously, people’s disagreements on this question have a big impact on their preferred policies!