A deontologist may care about consequences, of course. I think whether and how much you are responsible for the lies of an AI you create probably depends on the exact theory. And of course knowingly doing something to risk destroying the world would almost certainly be worse than lying-by-proxy, so such arguments could be effective.