Another good reason for this is the subsystem alignment problem. Suppose you are trying to answer a question, but you're not quite sure how. You can run some extra computations to help you. Which computations do you run? Considering the myopic/non-myopic spectrum, you want to run either an extremely non-myopic computation or an extremely myopic one (because those are Good, and anything in between runs the risk of Evil).
In contrast (or, rather, in addition), there could be myopic supersystems constructed from non-myopic agents. Human science is one example: scientists want lots of different things—money, power, status, respect (I'm excluding curiosity for the sake of the argument)—and make plans to achieve them, but the system of incentives is set up such that the results accurately reflect reality (or at least that's the idea; real science is not that perfect).