I actually agree with this for Marxism, but I generally think cases like these are the exception, not the rule. Bad ideas like these often require subtler counterarguments than the general public will notice, so people fall back on bad criticisms, and it’s here that you need to be careful not to update on the counterarguments being terrible.
And I think AI is exactly such a case: conditional on AI doom being wrong, it will be wrong for reasons the general public mostly won’t know or care to state, and people will still give bad arguments against AI doom.
This also applies to AI optimism to a lesser extent.
Also, you haven’t linked to your comment properly: when I follow the link, it goes to the post rather than to your comment.
> And I think AI is exactly such a case: conditional on AI doom being wrong, it will be wrong for reasons the general public mostly won’t know or care to state, and people will still give bad arguments against AI doom.
Most people are clueless about AI doom, but they have always been clueless about approximately everything throughout history, and they get by through an alternative epistemic strategy: delegating sense-making and decision-making to supposed experts.
Supposed experts clearly don’t take AI doom seriously, considering that many of them are doing their best to race ahead as fast as possible, so people don’t take it seriously either, an attitude that seems entirely reasonable to me.
> Most people are clueless about AI doom, but they have always been clueless about approximately everything throughout history, and they get by through an alternative epistemic strategy: delegating sense-making and decision-making to supposed experts.
I agree with this, and it’s basically why I was saying that you shouldn’t update towards your view being correct based on the general public making bad arguments: that’s the first step towards false beliefs driven by selection effects.
It was in a sense part of my motivation for posting this at all.
> Supposed experts clearly don’t take AI doom seriously, considering that many of them are doing their best to race ahead as fast as possible, so people don’t take it seriously either, an attitude that seems entirely reasonable to me.
This is only half correct about how seriously experts take AI doom: some experts, like Yoshua Bengio and Geoffrey Hinton, do take it seriously, and I agree that their attitude is reasonable (though for different reasons than you would give).
My point is that “experts disagree with each other, therefore we’re justified in not taking it seriously” is a good argument, and this is what people mainly believe. If they instead offer bad object-level arguments, then sure, dismissing those is fine and proper.
> some experts, like Yoshua Bengio and Geoffrey Hinton, do take it seriously, and I agree that their attitude is reasonable (though for different reasons than you would give)
I agree that their attitude is reasonable, conditional on superintelligence being achievable in the foreseeable future. I personally think this is unlikely, but I’m far from certain.
> Also, you haven’t linked to your comment properly: when I follow the link, it goes to the post rather than to your comment.
Thank you, fixed.
> I agree that their attitude is reasonable, conditional on superintelligence being achievable in the foreseeable future. I personally think this is unlikely, but I’m far from certain.
I was referring to the general public here.