As someone who has no deontologist friends, should I bother reading this post?
Yes. If (for example) some well-meaning fool makes a utilitarian-friendly AI, then there will be a super-intelligence at large maximizing “aggregative total equal-consideration preference consequentialism” across all living humans. Being able to understand how deontologists think will better enable you to predict how their deontological beliefs will be resolved into preferences by the utilitarian AI. It may be that the best preference translation of a typical deontologist belief system turns out to be something that gives rise, in aggregate, to a dystopia. If that is the case, you should engage in the mass murder of deontologists before the run button is pressed on the AI.
I also note that as I wrote “you should engage in mass murder” I felt bad, despite the fact that the act has extremely good expected consequences in the hypothetical situation. Part of that ‘bad feeling’ is due to inbuilt deontological tendencies, and part is because my intuitions anticipate negative social consequences for making such a statement, since deontological ethical beliefs are more socially rewarded. Both of these are also reasons that reading the post and understanding the reasoning that deontologists use may turn out to be useful.