IIUC the core of this post is the following:
There are three forces/reasons pushing very long-sighted ambitious agents/computations to make use of very short-sighted, unambitious agents/computations:
1. Schelling points for acausal coordination
2. Epistemology works best if you just myopically focus on correctly answering whatever question is in front of you, rather than, e.g., trying to optimize your long-run average correctness.
3. Short-sighted, unambitious agents/computations are less of a threat, more easily controlled.
I’ll ignore 1 for now. For 2 and 3, I think I understand what you are saying on a vibes/intuitive level, but I don’t trust my vibes/intuitions enough; I’d like to see 2 and 3 spelled out and justified more rigorously. IIRC there’s an academic philosophy literature on 2, but I’m afraid I don’t remember it well. Are you familiar with it?

As for 3, here’s a counterpoint: yes, more long-sighted agents/computations carry with them the risk of various kinds of Evil, but the super-myopic ones have drawbacks of their own. (Example: sometimes you’d rather have a bureaucracy staffed with somewhat agentic people who can make exceptions to the rules when needed than one staffed with apathetic rule-followers.) You haven’t really argued that the cost-benefit analysis systematically favors delegating to myopic agents/computations.