I like the phrase “myopic consequentialism” for this. It often has bad consequences because bounded agents need to cultivate virtues (distilled patterns that work well across many situations, even when you don’t have the compute or information to see exactly why they’re good in many of those situations) to do well, rather than trying to brute-force search over a large universe of options.
I personally find the “virtue is good because bounded optimization is too hard” framing less valuable/persuasive than the “virtue is good because your own brain and those of other agents are trying to trick you” framing. Basically, the adversarial dynamics seem key in these situations; otherwise, a better heuristic might be to focus on the highest-order bit first and then work down the importance ladder.
Though of course both are relevant parts of the story here.