That is, the chance of a total stop is clearly higher in this world than in the counterfactual one where any one of Demis, Dario, or Sam hadn't gone into AI capabilities, because a CEO of a leading AI organization saying "yeah I think AI could maybe kill us all" is something that by default would not happen. As I said before, most people in the field of AI don't take AI risk seriously; this was even more true back when they first entered the field. The default scenario is one where people at NVIDIA and Google Brain and Meta are reassuring the public that AI risk isn't real.
I have the impression that the big guys started taking AI risk seriously when they saw capabilities that impressed them. So I expect that if Musk, Altman & the rest of the Dreamgrove hadn't embarked on pushing the frontier faster than it was already moving, AI researchers would have taken the risk just as seriously once the same capability level was reached. Famous AI scientists already knew the AI risk arguments; where OpenAI made a difference was not in telling them about AI risk, but in shoving GPT under their noses.
I think the public would then have been able to side with Distinguished Serious People raising warnings about the dangers of ultra-intelligent machines even if Big Corp claimed otherwise.