Non X-risks from AI are still intrinsically important AI safety issues.
I want to push back on this: I think it's true as stated, but emphasising it can be misleading.
Concretely, I think there can be important near-term, non-X-risk AI problems that meet the priority bar to work on. But the standard EA mindset of importance, tractability, and neglectedness still applies, and near-term problems are often salient and politically charged in a way that makes those factors harder to evaluate.
I think work on these problems is most justified for products that are very widely used and where there is little corporate incentive to fix the issues (recommender system alignment is the most obvious example here).
I broadly agree with and appreciate the rest of this post though! And want to distinguish between “this is not a cause area that I think EAs should push on on the margin” and “this cause area does not matter”—I think work to make systems less deceptive, racist, and otherwise harmful seems pretty great.
No disagreements substance-wise. But I'd add that work to avoid scary autonomous weapons is likely at least as important as recommender systems. If this post's reason #1 were the only reason for working on neartermist AI stuff, it would probably sit alongside a lot of other very worthy but likely not top-tier impactful issues. But I see it as emphasis-worthy icing on the cake given #2 and #3.
Cool, agreed. Maybe my main objection is just that I'd have put it last rather than first, but this is a nitpick.