What about when there are agents with different source codes and different preferences? The result here suggests that one of our big unsolved problems, that of generally deriving a “good and fair” global outcome from agents optimizing their own preferences while taking logical correlations into consideration, may be unsolvable, since consideration of logical correlations does not seem powerful enough to always obtain a “good and fair” global outcome even in the single-player case.
I don’t understand this statement. What do you mean by “logical correlations”, and how does this post demonstrate that they are insufficient for getting the right solution?