Both of my comments were about the thought experiment at the end of the post:
You are given a moral dilemma: either a million people get an experience worth 100 utility points each, or a million and one people get 99 utility points each. The first option yields more total utility, but if we take the second option one more person gets served and nobody else can even tell the difference.
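For concreteness, here is the arithmetic behind "more total utility", a minimal sketch assuming the utility points simply sum across people:

```python
# Totals for the two options in the thought experiment,
# assuming "utility points" add linearly across people.
option_a = 1_000_000 * 100        # 100,000,000 utility points
option_b = (1_000_000 + 1) * 99   #  99,000,099 utility points

print(option_a - option_b)        # 999,901 points in favor of option A
```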
Related: https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment