Logical uncertainty, which is unavoidable no matter how smart you are, blurs the line. An AI won't "understand" expected utility maximization completely either; it won't see all the implications no matter how much computational resource it has. And so it needs further heuristics to guide its decisions where it can't work out all the implications. Those heuristics are the counterparts of deontological injunctions, although of course they must be subject to revision on sufficient reflection (and what "sufficient" means is itself one of these injunctions, also subject to revision). Some of them will even have normative implications; in fact, that's one reason a preference is not a utility function.