Thank you for everything you've done. My experience of this world has been a lot better since I discovered your writings. While I agree with your assessment of the likely future, and I assume you have better things to do with your time than read random comments, I still wanted to say that.
I’m curious to see what exactly the future brings. Whilst the result of the game may be certain, I can’t predict the exact moves.
Enjoy it while it lasts, friends.
(Not saying give up, obviously.)
It seems to me that the agents you are considering don't have as complex a utility function as people, who seem to count their own well-being as at least part of their utility function. Additionally, people usually don't have a clear idea of what their actual utility function is, so if they try to go all-in on it, they let some of their values fall by the wayside. AFAIK this limitation is not a requirement for an agent.
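(To make the contrast concrete, here is a toy formalization of my own, not anything from your post: a person's utility function might carry an explicit self-regarding term,

$$U(x) = U_{\text{goal}}(x) + \lambda \, U_{\text{self}}(x), \qquad \lambda > 0,$$

where $\lambda$ is an invented weight on the agent's own well-being. The simpler agents I have in mind are the $\lambda = 0$ case: all goal, no self.)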
If you had your utility function fully specified, I don't think you could be considered rational and yet not a “holy madman”. (This borders on my answer to the question of free will, which, so far as I can tell, is a question that should not be answered explicitly, so as not to spoil it for anyone who wants to figure it out for themselves.)
Suffice it to say that optimized/optimal functioning should be a convergent instrumental goal, similar to self-preservation, so a rational agent should have it as a goal. If I'm not mistaken, this means that a problem in work-life balance, as you put it, is not something an actual rational agent would tolerate, provided there are options on the table that avoid the problem and offer a similar return otherwise.
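(Again in invented notation, just to pin down what I mean: if $a_{\text{balanced}}$ and $a_{\text{burnout}}$ are two available policies with

$$\mathbb{E}[U \mid a_{\text{balanced}}] \geq \mathbb{E}[U \mid a_{\text{burnout}}],$$

then the agent simply takes $a_{\text{balanced}}$; a balance problem only bites when no such option exists.)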
Or did I misinterpret what you wrote? I can be dense sometimes...^^