Yep, that’s the biggest issue I have with my own side of the AI risk debate: quite often they don’t even try to state why it isn’t a risk, and instead appeal to social authority. And while social authority is evidence, it’s too easy to filter that evidence for it to be very useful.
To be frank, I don’t blame a lot of the AI risk people for not being convinced that we aren’t doomed. Even though reality doesn’t grade on a curve, the unsoundness of the current arguments against doom doesn’t help, and it is in fact bad that my side keeps doing this.