I agree with this sentiment in response to the question of “will this research impact capabilities more than it will alignment?”, but not in response to the question of “will this research (if implemented) elevate s-risks?”. Partial alignment inflating s-risk is something I am seriously worried about, and prosaic solutions especially could lead to a situation like this.
If your research not influencing s-risks negatively depends on it not being implemented, and you think your research is good enough to post about, don’t you see the dilemma here?