Ohh yes, that was exactly one of my ideas when formulating this post. AI alignment has to be designed so that it does not treat society as a concrete, monolithic concept, but as an abstract one.
The consequences of an AI trying to improve society (as an agent-type AI would) by maximizing a social indifference curve could be disastrous (perhaps a Skynet-level scenario...).
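To make the worry concrete, here is a minimal sketch (all policy names and payoff numbers are invented for illustration) of why an optimizer that collapses society into a single aggregate utility number can pick outcomes that leave some individuals strictly worse off:

```python
# Each hypothetical policy maps to per-individual utilities.
policies = {
    "status_quo":        [5, 5, 5],
    "aggregate_optimum": [10, 10, -3],  # higher total, but individual 2 loses
}

def social_utility(utilities):
    """The monolithic view: collapse everyone into one number."""
    return sum(utilities)

best = max(policies, key=lambda p: social_utility(policies[p]))
print(best)  # "aggregate_optimum" wins despite harming individual 2

def pareto_improves(new, old):
    """The individualist check the sum throws away: nobody worse off, someone better off."""
    return all(n >= o for n, o in zip(new, old)) and \
           any(n > o for n, o in zip(new, old))

print(pareto_improves(policies["aggregate_optimum"], policies["status_quo"]))  # False
```

The point of the sketch is only that the aggregate objective is indifferent between outcomes that the Pareto criterion distinguishes; the per-individual information is destroyed at the moment of aggregation.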
Alignment would have to be achieved through coordination between individuals. However, this seems to me extremely difficult to do.
I—But would analyzing them as sub-agents be better than analyzing them as if they were the true agents? And why would they be sub-agents? Who would be the main causal agent?
II—Conclusions drawn from an analysis of utility theory through methodological individualism.
But if individuals are not the basic fragments (the units to which we can most reduce our analysis), then what?
In my view, we would then enter psychobiological investigations of how, for example, genes make choices. However, as David Friedman rightly observed, even if we reduced the analysis to that level, the conclusions would be the same...
The problem is that a holistic abstraction like "the society" describes reality less effectively than the ideal type of methodological individualism ("the individual"). Reducing the analysis to this fundamental fragment describes much better the processes that actually occur in so-called social phenomena.

And yes, democracy is the system that best captures the notion that a total improvement of society is not achievable.
It is certainly possible that there are ways to improve the situation of more than one person at once, given that non-zero-sum games exist. The problem, as Elinor Ostrom noted in her analysis of the governance of the commons (Ostrom 1990, ch. 5), is that increasing social complexity (e.g., bringing more agents with different preferences into the game) makes alignment between players less and less likely.
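Both halves of that claim can be sketched in a few lines (the payoff matrix and probabilities below are toy numbers, not anything from Ostrom): a stag-hunt-style game shows a mutual-gain outcome, and a simple independence assumption shows how the chance that every agent coordinates decays as the group grows.

```python
# Stag-hunt-style payoffs: (row player, column player). Invented numbers.
payoffs = {
    ("cooperate", "cooperate"): (4, 4),
    ("cooperate", "defect"):    (0, 3),
    ("defect",    "cooperate"): (3, 0),
    ("defect",    "defect"):    (3, 3),
}

# Non-zero-sum: mutual cooperation leaves BOTH players better off
# than mutual defection, so improvement for more than one person exists.
cc = payoffs[("cooperate", "cooperate")]
dd = payoffs[("defect", "defect")]
print(all(c > d for c, d in zip(cc, dd)))  # True

# But if each agent independently cooperates with probability p, the
# chance that ALL n agents align shrinks exponentially with n.
p = 0.9
for n in (2, 5, 20, 100):
    print(n, round(p ** n, 4))
```

The exponential decay is of course an artifact of the independence assumption; Ostrom's point is precisely that institutions can correlate behavior and beat this baseline, but the baseline shows why alignment gets harder as more heterogeneous agents enter the game.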