And this also means that the machine needs to discount (its altruistic interest in) human welfare at the same rate as humans do. If it discounts faster, it can threaten humans with a horrible future, since that distant future weighs little in its own calculus but heavily in theirs. Conversely, if it discounts human happiness much more slowly than humans do, it can threaten to delay their gratification, since the delay costs it little while costing impatient humans a great deal.
If a machine wants for humans what the humans want for themselves, it should discount those goods the way the humans themselves do. That doesn't imply that it has any temporal discounting in its own utility function; it is just using a moral mirror.
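To make the leverage concrete, here is a minimal sketch; the discount factors (0.9 for humans, 0.5 and 0.99 for the mismatched machines) and the welfare numbers are illustrative assumptions, not figures from the text. A machine that discounts human welfare faster than humans do finds a distant harm nearly costless to threaten; one that discounts more slowly finds delay nearly costless; a moral-mirror machine that applies the humans' own discount factor has no such cheap threats.

```python
def discounted_value(welfare_stream, discount):
    """Present value of a per-period human-welfare stream under a discount factor."""
    return sum(w * discount ** t for t, w in enumerate(welfare_stream))

human_discount = 0.9      # assumed: humans weight year t by 0.9**t
impatient_machine = 0.5   # machine that discounts human welfare faster than humans do
patient_machine = 0.99    # machine that discounts human welfare more slowly

# A threatened harm 10 years out: severe for the human, trivial for the impatient machine.
harm_in_10_years = [0.0] * 10 + [-100.0]
print(discounted_value(harm_in_10_years, human_discount))     # about -34.9
print(discounted_value(harm_in_10_years, impatient_machine))  # about -0.1

# A threatened 10-year delay of a reward: costly for the human, cheap for the patient machine.
reward_now = [100.0]
reward_later = [0.0] * 10 + [100.0]
print(discounted_value(reward_now, human_discount) - discounted_value(reward_later, human_discount))    # about 65.1 lost
print(discounted_value(reward_now, patient_machine) - discounted_value(reward_later, patient_machine))  # about 9.6 lost

# The moral mirror: the machine scores human welfare with the humans' own factor,
# so any threat costs it exactly as much (in its own evaluation) as it costs them.
mirror_machine = human_discount
```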