My speculation: it’s a tribal “arguments as soldiers” mentality. Saying something bad (people’s mental health is harmed) about something from “our team” (people promoting awareness of AI x-risk) is viewed negatively. Ideally, people on LessWrong know not to treat arguments as soldiers and understand that situations can be multi-faceted, but I’m not sure I believe that is the case.
Two more steelman speculations:
First: promoting x-risk awareness is currently very important, and people focused on AI Alignment are an extreme minority, so even though it is true that learning the future is under threat causes people distress, it is important to let them know. But I note that this perspective shouldn’t limit discussion of how to promote awareness of x-risk while also promoting emotional well-being.
My second steelman: you didn’t include anything productive, such as pointing to “Mental Health and the Alignment Problem: A Compilation of Resources”.
Fwiw, I would love for people promoting AI x-risk awareness to be aware of and careful about how the message affects people, and to promote resources for people’s well-being, but this seems comparatively low priority. Currently, there is no obligation in computer science to swear an oath of ethics the way doctors and engineers do, and papers are only expected to speculate on the benefits of their contents, not the ethical considerations. It seems like the mental health problems computer science in general is causing, especially via social media and AI chatbots, are worse than people hearing that AI is a threat.
So even if I disagree with you, I do value what you’re saying and think it deserves an explanation, not just downvoting.