It took Robutil longer still to consider that perhaps humans not only need to prioritize their own wellbeing and friendships, but to prioritize them for their own sake?
Why would Robutil consider that? It seems contrary to His Nature as a God. You can’t get more utilons that way!
That is, the point of imagining metaphorical Gods as the embodiments of abstract ideals is to explore the ramifications of those ideals being taken seriously. If you flinch from the logical consequences because the results are “unreasonable” from a human perspective, you’re disrespecting the Gods (except Humo).
Humo and Robutil should definitely compromise and coordinate with each other. (Polytheistic Gods are not omnipotent. Given that Robutil isn’t powerful enough to overpower Humo, kill all humans, and tile the lightcone with utilitronium, he has no better option than to work with him: humans do generate utilons, even if they’re not maximally efficient.) But to depict Robutil as being persuaded that something has value “not just as part of a utilitarian calculus” is silly and shatters the reader’s suspension of disbelief.
I claim that you can, in fact, get more utilons that way. For now.
This is based on hearing about various experiences of people trying the naive Level 2 Robutil move of “try to optimize their leisure/etc to get more utilons”, and then finding themselves weirdly fucked up. The claim is that the move “actually, just optimize some things for yourself” works better overall than the move of “try to explicitly evaluate everything in utilons.”
But it does seem true/important that this is a temporary state of affairs. A fully informed utilitarian with the Utility Textbook From the Future could optimize their leisure/well-being fully for utility; the tails would come apart, and they would do different things than a fully informed “humanist” would. The claim here is that a bounded utilitarian who knows they are bounded and info-limited can eventually recognize they are not yet close enough to a fully fledged theory to try to use the math directly. (This sort of pattern seems common for various “turn things into math” projects.)
I agree this is important enough to be part of the OP though, and that the current phrasing is misleading.
It took Robutil longer still to consider that perhaps [...] it can be valuable to prioritize [your own wellbeing and friendships] for their own sake, not just as part of a utilitarian calculus, because trying to justify them in utilitarian terms may be a subtly wrong step in the dance that leaves them hollow.
Oh for sure. When I said “you”, I, uh, was in fact assuming the reader was a human.
I am not under the illusion that a self-actualizing AgentGPT reading this essay needs to prioritize its wellbeing and friendships.
(but, edited)