[Question] Is AI safety research less parallelizable than AI research?

It seems intuitive to me that this would be the case, and I've seen Eliezer make the claim a few times, but I can't find an article describing the idea. Does anyone have a link?
