[Question] Is AI safety research less parallelizable than AI research?

It seems intuitive to me that this would be the case, and I've seen Eliezer make the claim a few times, but I can't find an article describing the idea. Does anyone have a link?