I didn’t downvote you, but I have now disagree-voted you. I have also upvoted you. Here are my thoughts:
--Thanks for your comment. You raise two issues: the effect on people’s mental health, and the legibility of the rigor of the work. You claim that making the rigor more legible, e.g. by having skeptics comment approvingly on the piece, would help with the mental health problem.
--I sympathize with the concerns about people’s mental health. I don’t think it should be my top priority—my top priority should be to figure out what the future might look like, as best as I can, and communicate that clearly—but insofar as I can help people’s mental health without compromising on that, I’m interested.
--I don’t see how getting skeptics to comment approvingly would help people’s mental health, though. Wouldn’t it make things even worse, since people wouldn’t be able to dismiss AI 2027 so easily?
--You may be interested to know that Gary Marcus, a skeptic of short AI timelines, read a draft and liked it, and so did Dean W. Ball, who is more in the e/acc camp and, at least until recently, had longer AI timelines than mine. More generally, we had comments/feedback from about a hundred people, from the companies, nonprofits, academia, government, etc., prior to publication.
--I don’t claim that AI 2027 has been more polished & reviewed than any climate change scenario. I’m not familiar with the literature on climate change scenarios. I am only familiar with the literature on AI scenarios, and as far as I can tell, AI 2027 is head and shoulders above everything else in terms of rigor/quality.
--In the future, we’ll try to do better still. The idea of getting skeptics to critique it and publish their critiques alongside it is a good one, and we might do some version of that.
--Do you have any suggestions for how we could help people react to our work in a healthy rather than unhealthy way? We already did the Slowdown ending, instead of simply having the Race ending, which hopefully helped, yes?