It seems to me that AI will need to think about impossible worlds anyway—for counterfactuals, logical uncertainty, and logical updatelessness/trade. That includes worlds that are hard to simulate, e.g. “what if I try researching theory X and it turns out to be useless for goal Y?” So “logical doors” aren’t that unlikely.