If you do AI research competently, you'll quickly start noticing that a lot of research is dual use, with real uncertainty about how much your work contributes to safety versus capability gains. Thus, the virtue of silence.
This is one downside to be careful of with outreach, but on net I think it's quite good to have more high-quality analyses of AI risk. The goal should be to get people to take the problem seriously, not to get them to blindly accept the first safety-related research opportunity they can find.