I’m still not very sure how to interpret downvotes in the absence of disagree votes… Do people really mean it has negative value to raise discussion about having tighter quality control on publishing high-profile AI takeover scenarios, or is it just disagreement with the claim that the lack of robust quality control is a problem?
To expand a little more: in my field of climate science research, people have been wrestling for many years with how to produce useful scenarios about the future. There are a couple of instances where respectable groups have produced disaster scenarios, but as I understand it those have generally had a lot more review than the work here. There would be a lot of concern about publishing such work without external checking by people in the wider community.
It wouldn’t necessarily have to be review like in a journal article—having different experts provide commentary published alongside a report could also be very helpful for giving less-informed readers a better idea of where the crux disagreements are, etc.
I didn’t downvote you, but I have now disagree voted you. I have also upvoted you. Here are my thoughts:
--Thanks for your comment. You raise two issues: the effect on people’s mental health, and the legibility of the rigor of the work. You claim that making the rigor more legible, e.g. by having skeptics comment approvingly on the piece, would help with the mental health problem.
--I sympathize with the concerns about people’s mental health. I don’t think it should be my top priority—my top priority should be to figure out what the future might look like, as best as I can, and communicate that clearly—but insofar as I can help people’s mental health without compromising on that, I’m interested.
--I don’t see how getting skeptics to comment approvingly would help people’s mental health though. Wouldn’t it make it even worse, because people wouldn’t be able to dismiss AI 2027 so easily?
--You may be interested to know that Gary Marcus, a skeptic of short AI timelines, read a draft and liked it, and so did Dean W. Ball, who is more in the e/acc camp and at least until recently had longer AI timelines than me. More generally we had comments/feedback from about a hundred people, from the companies, nonprofits, academia, government, etc. prior to publication.
--I don’t claim that AI 2027 has been more polished & reviewed than any climate change scenario. I’m not familiar with the literature on climate change scenarios. I am only familiar with the literature on AI scenarios, and as far as I can tell, AI 2027 is head and shoulders above everything else in terms of rigor/quality.
--In the future, we’ll try to do better still. The idea of getting skeptics to critique it and publish their critiques alongside is a good one and we might do some version of that in the future.
--Do you have any suggestions for how we could help people react to our work in a healthy rather than unhealthy way? We already did the Slowdown ending, instead of simply having the Race ending, which hopefully helped, yes?
My speculation: It’s a tribal, arguments-as-soldiers mentality. Saying something bad (people’s mental health is harmed) about something from “our team” (people promoting awareness of AI x-risk) is viewed negatively. Ideally people on LessWrong know not to treat arguments as soldiers and understand that situations can be multi-faceted, but I’m not sure I believe that is the case.
Two more steelman speculations:
Currently, promoting awareness of x-risk is very important and people focused on AI alignment are an extreme minority, so even though it is true that learning the future is under threat causes people distress, it is important to let people know. But I note that this perspective shouldn’t limit discussion of how to promote awareness of x-risk while also promoting good emotional well-being.
So, my second steelman: you didn’t include anything productive, such as pointing to Mental Health and the Alignment Problem: A Compilation of Resources.
Fwiw, I would love for people promoting AI x-risk awareness to be aware of and careful about how the message affects people, and to promote resources for people’s well-being, but this seems comparatively low priority. Currently in computer science there is no obligation to swear an oath of ethics the way doctors and engineers do, and papers are only expected to speculate on the benefits of their contents, not the ethical considerations. It seems like the mental health problems computer science in general is causing, especially via social media and AI chatbots, are worse than the harm from people hearing that AI is a threat.
So even if I disagree with you, I do value what you’re saying and think it deserves an explanation, not just downvoting.