There is a certain story, probably common for many LWers: first, you learn about spherical-cow-in-a-vacuum perfect reasoning, like Solomonoff induction/AIXI. AIXI considers all possible hypotheses, predicts all possible consequences of all possible actions, weights the hypotheses by probability, and computes the optimal action by choosing the one with maximal expected value. Then, usually without being stated outright but implied very loudly, comes the lesson that this method of thinking is computationally intractable at best and uncomputable at worst, so you need clever shortcuts. This is true in general, but as a result the approach of "just list out all the possibilities and consider all the consequences (inside a certain subset)" gets neglected.
For example, when I try to solve a puzzle in "Baba is You" and then analyze how I could have solved it faster, I usually conclude: "I should have just written down all the pairwise interactions between the objects to notice which one leads to the solution."
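The "just enumerate everything" move above is mechanical enough to sketch in a few lines of Python. The object names and the `interaction` stub below are hypothetical placeholders, not actual "Baba is You" mechanics; the point is only the shape of the exhaustive loop:

```python
from itertools import combinations

# Hypothetical objects on a level; in practice these would be the
# rule words and sprites you actually see on screen.
objects = ["BABA", "WALL", "FLAG", "ROCK", "IS", "WIN", "PUSH"]

def interaction(a, b):
    # Placeholder: here you would note, by hand or in code, what
    # happens when a and b touch, stack, or combine into a rule.
    return f"{a} x {b}"

# Exhaustively list every pair instead of trusting intuition
# to surface the one relevant interaction.
pairs = [interaction(a, b) for a, b in combinations(objects, 2)]
for p in pairs:
    print(p)
```

With 7 objects this is only 21 pairs, which is exactly why the brute-force listing is tractable here even though it is hopeless for AIXI-sized hypothesis spaces.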