I’m glad you wrote this; I very much feel the same way but wasn’t sure how to put it. It feels like many reviewers (the ones who agree that AI x-risk is a big deal, but spend 90% of their review criticizing the book) are treating this like an abstract philosophical debate. ASI risk is a real thing with a serious chance of causing the extinction of humanity.
I don’t want to say people aren’t allowed to disagree, so I’m not sure exactly how to express this. But I think it’s crazy to believe AI x-risk is a massive problem, and then spend most of your words arguing that this particular group of people is overstating it.
I feel like every time I write a comment, I have to add a caveat that I’m not as doomy as MIRI and that I somewhat disagree with their predictions. But I don’t actually think that matters. If you think there’s a 5% or 20% chance of extinction from ASI, you should be sounding the alarm just about as loudly as MIRI is! Maybe 75% as loudly, but not 20% as loudly: how loudly you should sound the alarm is not a linear function of your P(doom).
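One way to put rough numbers on that (just a back-of-envelope illustration with example figures, not anyone’s actual estimate): even a “low” 5% P(doom), applied to roughly 8 billion people, gives

$$\mathbb{E}[\text{lives lost}] \approx 0.05 \times 8\times 10^{9} = 4\times 10^{8},$$

i.e. hundreds of millions of expected deaths. That is already far past the point where loud public alarm is warranted, so moving your P(doom) from 5% to 95% barely changes what the appropriate response looks like.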