I think the biggest difference is that this would mean more people, with a wider range of personality types, interacting socially in a more arms-length, professionalized way, according to the social norms of academia.
Especially in CS, you can be accepted among academics as a legitimate researcher even without a formal degree, but this would require being able and willing to follow those existing social norms.
And in order to welcome and integrate new AI safety researchers from academia, the existing AI safety scene would have to make some spaces to facilitate this style of interaction, rather than the existing informal/intense/low-social-distance style.
I don’t think this is a sufficiently complete way of looking at things. It could make sense when the problem was thought to be a “replication crisis via p-hacking,” but it turns out things are worse than that.
The research methodology in biology doesn’t necessarily have room for statistical funny business, but there are all these cases of influential Science/Nature papers that contained fraud via Photoshop.
Gino and Ariely’s papers might have been statistically impeccable; the problem is that they were simply making up data points.
There is fraud in experimental physics and the applied sciences too, from time to time.
I don’t know much about what opportunities there are for bad research practices in the humanities. The only thing I can think of is citing a source that doesn’t actually say what is claimed. This seems like a particular risk when history or historical claims are involved, or when a humanist wants to refer to the scientific literature. The spectacular claim that Victorian doctors treated “hysteria” with vibrators turns out to have resulted from something like this.
Outside cases like that, I think the humanities are mostly “safe” in the way math is: they just need some kind of internal consistency, whether that means presenting a sound argument or offering a set of concepts and descriptions that people find harmonious or fruitful.