Of course if you look at the probability of the whole quest succeeding, it seems small either way, and distinguishing between different small probabilities is hard. But if you look at individual steps, we’ve made small but solid steps toward understanding “mindless” AI alignment (like the concept of adversarial examples), but no comparable advances in understanding “mindful” AI. So to me the weight of evidence is against your position.