In other words, conchis is taking a welfarist perspective on fairness, instead of a game-theoretic one. (I'd like to once again recommend Hervé Moulin's Fair Division and Collective Welfare, which covers both of these approaches.)
In this case, the agents are self-modifying AIs. How do we measure and compare the well-being of such creatures? Do you have ideas or suggestions?
None, I’m afraid. I’m not even sure whether I’d care about their well-being even if I could conceive of what that would mean. (Maybe I would; I just don’t know.)