One way to construct a (sometimes irrational) agent is to assign the agent’s decision making to a committee of perfectly rational agents—each with its own utility function.
Whether the decision making is done by a voting scheme or by casting lots to pick the current committee chairman, the committee’s choices will occasionally be irrational, and hence cannot in general be described by any utility function.
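The voting case can be made concrete with a minimal sketch (the three members and their rankings below are my own illustrative example, not from the comment): each member has a perfectly transitive ranking, yet pairwise majority vote produces a cyclic, hence irrational, group preference (the classic Condorcet cycle).

```python
# Three committee members, each with a transitive ranking over options A, B, C.
# Rankings are listed best-to-worst; the specific profile is chosen to cycle.
rankings = [
    ["A", "B", "C"],  # member 1: A > B > C
    ["B", "C", "A"],  # member 2: B > C > A
    ["C", "A", "B"],  # member 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of members rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

# Pairwise majority vote yields a cycle: A beats B, B beats C, yet C beats A,
# so no utility function over {A, B, C} can represent the group's choices.
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```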
However, if the agents’ individual utility functions are common knowledge, then Nash bargaining may provide a way for the committee to combine their divergent preferences into a single, harmonious, rational composite utility function.
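As a toy sketch of the Nash bargaining idea over a finite set of joint options (the option payoffs and disagreement point below are made-up assumptions, not from the comment): the committee selects the option maximizing the product of each member’s gain over the disagreement payoff.

```python
# Hypothetical two-member committee: each option maps to (utility_1, utility_2).
options = {"opt1": (4.0, 1.0), "opt2": (3.0, 3.0), "opt3": (1.0, 4.0)}
disagreement = (0.5, 0.5)  # payoffs if the members fail to agree

def nash_product(payoffs):
    """Product of gains over the disagreement point (the Nash bargaining objective)."""
    return (payoffs[0] - disagreement[0]) * (payoffs[1] - disagreement[1])

# The Nash solution picks the option with the largest product of gains;
# here the balanced option wins over either lopsided one.
best = max(options, key=lambda o: nash_product(options[o]))
```

Because a single maximizer of a fixed objective is trivially rational, the resulting composite does behave like one agent with one utility function, which is the commenter’s point.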
Hm, that’s an interesting thought. Of course, we’re not talking about a council of a few voices in your head here. Voting theory results, which totally slipped my mind when writing the post, tell you that the number of voters with linear rankings needed grows something like log(N), where N is the number of options you want to rank arbitrarily.
Our N is huge, so we may actually be describing the same things—I’m just calling them “rules,” and you’re calling them “agents,” essentially—although my thinking isn’t using the voting framework, so the actual implications are slightly different.