I don’t see how this is relevant. I’m asking for examples of the OP’s failed replications of safety papers which are popular on lesswrong. I am not disputing that ML papers often fail to replicate in general.
I don’t understand why the OP would float the idea of founding an org to extend their replication work, based on the claim that replication failures are common here, without giving any examples (preferably ones they personally found). As it stands, this post is (to me) indistinguishable from noise.