…not selfish and unselfish components in their utility function, but parts of themselves in some less Law-aspiring way than that.
Utility functions don’t model all agents; we should look at a larger space of agents. I expect such a space to better model not just a single human but also a council of humans or a multiverse of acausal traders. I expect it also to say how an AGI should handle uncertainty about its preferences.
There should be a natural way to aggregate a distribution of agents into an agent, obeying the obvious law that an arbitrarily deeply nested distribution comes out the same way no matter which order you aggregate its layers in.
The free generalization, of course, is to take an agent to be a distribution over utility functions. The aggregate is then simply the flattening of nested distributions. This does not yet tell us how such an agent makes decisions: we can’t just take the expectation of the utility functions, because that is not invariant under replacing u with the equivalent utility function 2u.
We might need to replace the usual distribution monad with another.
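A minimal sketch of the flattening operation, assuming we represent a finite distribution as a list of (outcome, probability) pairs (this representation and the name `flatten` are my own choices, not from the thread). Flattening a distribution of distributions is the join of the ordinary distribution monad:

```python
from collections import defaultdict

def flatten(dist_of_dists):
    """Monadic join: collapse a distribution over distributions into a
    single distribution by multiplying the outer and inner weights."""
    out = defaultdict(float)
    for inner, p in dist_of_dists:
        for outcome, q in inner:
            out[outcome] += p * q
    return sorted(out.items())

# A nested aggregate: 1/3 a1 + 2/3 (1/2 a2 + 1/2 a3)
nested = [([("a1", 1.0)], 1/3),
          ([("a2", 0.5), ("a3", 0.5)], 2/3)]
print(flatten(nested))  # a1, a2, a3 each end up with weight 1/3
```

The monad laws are exactly the statement that an arbitrarily deeply nested distribution flattens to the same thing regardless of the order in which its layers are collapsed.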
Given agents a_1, a_2 and numbers p_1, p_2 such that p_1 + p_2 = 1, there is an aggregate agent called p_1 a_1 + p_2 a_2, which means “agents a_1 and a_2 acting together as a group, in which the relative power of a_1 versus a_2 is the ratio of p_1 to p_2”. The group does not make decisions by combining their utility functions, but instead by negotiating or fighting or something.
Aggregation should be associative, so (1/3)a_1 + (2/3)((1/2)a_2 + (1/2)a_3) = (1/3)a_1 + (1/3)a_2 + (1/3)a_3 = (2/3)((1/2)a_1 + (1/2)a_2) + (1/3)a_3.
If you spell out all the associativity relations, you’ll find that aggregation of agents is an algebra over the operad of topological simplices. (See Example 2 of https://arxiv.org/abs/2107.09581.)
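A sketch of the operadic structure, under the standard reading that an n-ary operation of the simplex operad is a weight vector (a point of the (n−1)-simplex) and composition substitutes inner weight vectors into the coordinates of an outer one (the function name and tuple encoding are my own):

```python
def compose(outer, inners):
    """Operadic composition in the simplex operad: substitute each inner
    weight vector into the corresponding coordinate of the outer one,
    scaling its entries by that outer weight."""
    return tuple(p * q for p, inner in zip(outer, inners) for q in inner)

# The two groupings of 1/3, 1/3, 1/3 from the associativity example:
left  = compose((1/3, 2/3), [(1.0,), (0.5, 0.5)])  # 1/3 a1 + 2/3 (1/2 a2 + 1/2 a3)
right = compose((2/3, 1/3), [(0.5, 0.5), (1.0,)])  # 2/3 (1/2 a1 + 1/2 a2) + 1/3 a3
print(left, right)  # both evaluate to (1/3, 1/3, 1/3)
```

An algebra over this operad assigns to each weight vector (p_1, …, p_n) an actual n-ary aggregation map on agents, and the associativity relations above are exactly the compatibility of those maps with `compose`.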
Of course we still have the old VNM-rational utility-maximizing agents. But now we also have aggregates of such agents, which are “less Law-aspiring” than their parts.
In order to specify the behavior of an aggregate, we might need more data than the component agents a_i and their relative power p_i. In that case we’d use some other operad.