I’m not sure the “ML Research Community” is cohesive enough (or, in fact, well-defined enough) to have very strong norms about this. Further, even if there is a norm, there doesn’t need to be a “consensus reasoning” behind it: different members could have different reasons for not bringing it up, and once the norm is established, it can be self-propagating—people don’t bring it up because their peers don’t bring it up.
I think if you’re looking for ways to talk to ML researchers, start small, and see what those particular researchers think and how they react to different approaches. If you find some that work, then expand it to more scalable talks to groups of researchers.