This.
In particular, imagine that the state space of the MDP factors into three variables x, y, and z, and that the agent has a set of actions with complicated effects on x, y, and z, plus a few actions that simply override y with a given value.
In some such MDPs, you might want a policy that does nothing other than copy a specific function of x into y. This policy could naturally be seen as a virtue: e.g. if x is some type of event and y is some logging or broadcasting channel, then it would be a sort of information-sharing virtue.
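A minimal sketch of this setup, with all names and the particular function f hypothetical:

```python
# Toy factored MDP: the state is (x, y, z).
# The "information-sharing virtue" is a policy that only copies f(x) into y.

def f(x):
    # Some fixed function of x whose value we want mirrored into y.
    return x * 2

def step(state, action):
    """Apply an action; 'set_y' / 'set_x' override one variable directly."""
    x, y, z = state
    kind, value = action
    if kind == "set_y":
        return (x, value, z)
    if kind == "set_x":
        return (value, y, z)
    return state

def virtue_policy(state):
    """Return the action that overrides y with f(x), touching nothing else."""
    x, _, _ = state
    return ("set_y", f(x))

state = (3, 0, 7)
state = step(state, virtue_policy(state))
print(state)  # (3, 6, 7): y now mirrors f(x); x and z are untouched
```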
While there are certain circumstances where consequentialism can specify this virtue, it’s quite difficult to do in general. (E.g. you can’t just minimize the difference |f(x) − y|, because then the agent might manipulate x instead of y.)
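The failure mode can be made concrete in the same toy setup (hypothetical names again): a greedy agent scoring actions by |f(x) − y| is indifferent between honestly updating y and manipulating x.

```python
# Why naively minimizing |f(x) - y| fails: the objective can be driven to
# zero by changing x rather than y, and it can't tell the two apart.

def f(x):
    return x * 2

def loss(state):
    x, y, _ = state
    return abs(f(x) - y)

def step(state, action):
    x, y, z = state
    kind, value = action
    if kind == "set_y":
        return (x, value, z)
    if kind == "set_x":
        return (value, y, z)
    return state

def greedy_agent(state, candidate_actions):
    """Pick whichever action minimizes the loss after one step."""
    return min(candidate_actions, key=lambda a: loss(step(state, a)))

state = (3, 0, 7)
# Honest option: set y = f(3) = 6.  Manipulative option: set x = 0 so f(x) = y.
actions = [("set_y", 6), ("set_x", 0)]
# Both drive the loss to zero, so nothing in the objective prefers the
# honest update over manipulating x.
print(loss(step(state, actions[0])), loss(step(state, actions[1])))  # 0 0
```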
The standard methods for converting policies into utility functions (e.g. inverse reinforcement learning) assume the policy makes no systematic errors, which doesn’t seem compatible with varying the agent’s intelligence level.