It’s not clear how to compare said risk (“quantum” is far more widely abused), but the creationist-comparison suggests AI may be severely prone to the problem, particularly since humans are predisposed to think of minds as ontologically basic, therefore pretty simple, therefore something they can have a meaningful opinion on, regardless of the evidence to the contrary.
What, you mean the part where we’re discussing a field that’s still highly theoretical, with no actual empirical evidence whatsoever, and then determining that it is definitely the biggest threat to humanity imaginable and that anyone who doesn’t acknowledge that is a fool?
What reason do I have to believe that this risk isn’t even stronger when it comes to AI?
This is one of the classic straw men, adaptable to any purpose.
Mockery is generally rather adaptable, yes.