To motivate the mathematical sections that follow, let’s consider a toy problem. Say that we’ve designed Deep Thought 1.0, an AI that reasons about its possible actions and only takes actions that it can show to have good consequences on balance. One such action is designing a successor, Deep Thought 2.0, which has improved deductive abilities. But if Deep Thought 1.0 (hereafter called DT1) is to actually build Deep Thought 2.0 (DT2), DT1 must first conclude that building DT2 will have good consequences on balance.
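The agent architecture attributed to DT1 above can be sketched in a few lines: an agent that surveys its available actions and takes one only when it can exhibit a derivation that the action's consequences are good on balance. This is a minimal illustrative sketch; the names (`DeductiveAgent`, `prover`, `KNOWN_GOOD`) and the stand-in "proof" representation are hypothetical, not from the original text.

```python
from typing import Callable, Iterable, Optional

class DeductiveAgent:
    def __init__(self, prover: Callable[[str], Optional[str]]):
        # `prover` attempts to derive "action A has good consequences
        # on balance"; it returns a derivation (here, just a string)
        # or None if no derivation is found.
        self.prover = prover

    def choose(self, actions: Iterable[str]) -> Optional[str]:
        # Act only on actions whose goodness the agent can show.
        for action in actions:
            proof = self.prover(action)
            if proof is not None:
                return action
        return None  # nothing is provably good; take no action

# Toy "prover": a lookup table standing in for actual deduction.
KNOWN_GOOD = {"build_DT2": "derivation: DT2's conclusions are trustworthy..."}

agent = DeductiveAgent(lambda a: KNOWN_GOOD.get(a))
print(agent.choose(["launch_untested", "build_DT2"]))  # prints "build_DT2"
```

The toy problem in the text is exactly the hard step hidden inside `prover`: for DT1 to return a derivation for `build_DT2`, it must reason about the consequences of deploying a reasoner stronger than itself.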
Why do we assume that agents reason deductively? Nothing in the AI or cognitive-science literature indicates that they do, and this supposed "easy case," chosen for simplified treatment, in fact balloons to the point of apparent impossibility. Are we attacking deductive-only agents precisely because they appear to be afflicted with a Hard Problem of Vingean Reasoning?