I—But would an analysis that treats them as sub-agents be better than one that treats them as the true agents? And why would they be sub-agents? Who would be the main causal agent?
II—Conclusions drawn from an analysis of utility theory through methodological individualism.
Oh yes, that was exactly one of my ideas when formulating this post. AI alignment has to be designed so that it does not treat society as a concrete, monolithic entity, but as an abstract one.
The consequences of an AI trying to improve society (as an agent-type AI would) by optimizing a social indifference curve could be disastrous (perhaps a Skynet-level scenario...).
Alignment must instead work through coordination between individuals. However, this seems to me extremely difficult to achieve.
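As a minimal toy sketch of the contrast (in Python, with entirely made-up utility numbers), consider an optimizer that maximizes an aggregated "social" utility versus a rule that only accepts changes every individual would agree to (simplified here to a Pareto-improvement check). The aggregate optimizer can happily pick a policy that devastates one individual; the coordination-style rule rejects it.

```python
# Toy illustration with hypothetical numbers: each policy maps to the
# utility it gives each of three individuals.
policies = {
    "status_quo": [5, 5, 5],
    "policy_A":   [10, 9, 0],  # highest total, but individual 3 is much worse off
    "policy_B":   [6, 6, 6],   # modest gain for everyone
}

def social_utility(utilities):
    """Aggregate individual utilities into one number (simple sum here)."""
    return sum(utilities)

def is_pareto_improvement(candidate, baseline):
    """True if no individual is worse off and at least one is better off."""
    return (all(c >= b for c, b in zip(candidate, baseline))
            and any(c > b for c, b in zip(candidate, baseline)))

baseline = policies["status_quo"]

# "Society as a monolithic agent": maximize the aggregate objective.
best_aggregate = max(policies, key=lambda p: social_utility(policies[p]))

# "Coordination between individuals": only accept Pareto improvements.
acceptable = [p for p, u in policies.items()
              if p != "status_quo" and is_pareto_improvement(u, baseline)]

print("Aggregate optimizer picks:", best_aggregate)   # policy_A
print("Pareto-acceptable policies:", acceptable)      # ['policy_B']
```

This is only a cartoon of the problem, of course; real coordination among individuals involves bargaining, strategic behavior, and preference elicitation, which is part of why it seems so hard.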