When this post came out, I left a comment saying:

It is not for lack of regulatory ideas that the world has not banned gain-of-function research.

It is not for lack of demonstration of scary gain-of-function capabilities that the world has not banned gain-of-function research.

What exactly is the model by which some AI organization demonstrating AI capabilities will lead to world governments jointly preventing scary AI from being built, in a world which does not actually ban gain-of-function research?

Given how the past year has gone, I should probably lose at least some Bayes points for this. Not necessarily very many Bayes points; notably, there is still no ban on AI capabilities research, and it doesn’t look like there will be one. But the world has at least moved marginally closer to world governments actually stopping AI capabilities work over the past year.