That’s fair, but the deontological argument doesn’t work for anyone building the extinction machine who is unconvinced by x-risk arguments, or who deludes themselves that it’s not actually an extinction machine, that extinction is extremely unlikely, or that the extinction machine is the only thing that can prevent extinction (as in all the alignment-via-AI proposals), etc.
This is not the case for many at Anthropic.
True; in general, many people who behave poorly do not know that they do so.