Can a deontologist still care about consequences?
Suppose you believe that lying is wrong for deontic reasons. Does it follow that we should program an AI never to lie? If so, can a consequentialist counter with arguments about how that would result in destroying the universe and (assuming those arguments were empirically correct) have a hope of changing your mind?
A deontologist may care about consequences, of course. Whether, and how much, you are responsible for the lies of an AI you create probably depends on the exact theory. And knowingly doing something that risks destroying the world would almost certainly be worse than lying-by-proxy, so such arguments could be effective.