It’s fine to make the mistake of publishing something if the mistake you made was assuming “this is great research”, but if the mistake was “this is safe to publish because I’m new to research”, the consequences can be irreversible.
“Irreversible consequences” is not that huge of a deal. The consequences of writing almost any internet comment are irreversible. I feel like you need to argue for also the expected magnitude of the consequences being large, instead of them just being irreversible.
I agree with this sentiment in response to the question of “will this research impact capabilities more than it will alignment?”, but not in response to the question of “will this research (if implemented) elevate s-risks?”. Partial alignment inflating s-risk is something I am seriously worried about, and prosaic solutions especially could lead to a situation like this.
If your research’s not influencing s-risks negatively depends on its not being implemented, and you also think your research is good enough to post about, don’t you see the dilemma here?