The authors employ the “Ought-Can” principle to defend their assumption that the space of possible worlds should be treated as finite:
Ought-Can: A norm should not demand anything of an agent that is beyond her
epistemic reach.
Their argument is essentially this: we (humans) can only divide the space of logical possibilities into finitely many options, so by Ought-Can no norm may demand that we reason over an infinite space of possible worlds.
This is a bit misguided. They should first ask: what is the right answer, cognitive resources be damned? E.g., what is the true probability that an apple will fall on my head tomorrow? Even if this answer is impossible to compute exactly, we need to know that it exists in principle so that we can approximate it in some way. As it stands, they have approximated the true answer, but we don't know what the true answer even looks like, so it's impossible to evaluate how close their approximation is, or can be.
(This seems like the sort of mistake you make if you aren’t thinking in the back of your head “how would I program an AI to use this epistemology?”)
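The point can be made concrete with a toy sketch (my own illustration, not anything from the paper): posit a "true" credence function over infinitely many worlds, say a geometric one, and let a finite agent coarsen it to k worlds plus a catch-all "everything else" world. Precisely because the infinite answer is defined in principle, the coarsening's error is measurable:

```python
def true_prob(n: int) -> float:
    """Probability of world n under the (hypothetical) true distribution:
    a geometric distribution p(n) = (1/2)^(n+1) over worlds n = 0, 1, 2, ..."""
    return 0.5 ** (n + 1)

def finite_approximation(k: int) -> list[float]:
    """A finite agent's coarsening: keep the first k worlds,
    lump the infinite tail into one catch-all world."""
    kept = [true_prob(n) for n in range(k)]
    return kept + [1.0 - sum(kept)]  # tail mass, known here in closed form

def worst_event_error(k: int) -> float:
    """Mass assigned to the catch-all world: the structure inside
    the tail is lost, so this bounds the error on events there."""
    return 1.0 - sum(true_prob(n) for n in range(k))

for k in [2, 5, 10]:
    print(k, worst_event_error(k))
```

The agent never computes the infinite answer, yet the error bound exists and shrinks as k grows; without the in-principle true distribution, there is nothing for that bound to be a bound on.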
EDIT: At the end of the paper the authors admit that they do need to look into the infinite case, so the problem isn't as bad as I initially thought; the paper reads more like tackling a simple case before going after the fully general result.