Re: Ben’s question, but at a much lower level: to some extent a logical inductor has to keep accepting actual failures as the price of preventing spurious counterfactuals, whereas in our world we model the consequences of certain actions without ever taking them.
It’s a free-lunch type of thing: we assume our world has far more structure than just “the universe is a computable environment”, so we can extrapolate reliably in many cases. Strictly logically, we can’t assume that: the universe could be set up to violate physics and reward you personally for diving into a sewer, but you’d never find that out, because that’s an action you won’t voluntarily take.
So if the universe appears to run by simple rules, you can often do without exploration; but if you can’t assume it is (and always will be) run by those rules, then you have to accept failures as the price of knowledge.
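To make the trade-off concrete, here’s a toy sketch (my construction, not anything from the logical-inductor formalism): an epsilon-greedy agent in a hypothetical universe where the “sewer” action secretly pays off. An agent that never explores keeps its spurious estimate of that action forever; one that sometimes accepts a possible failure eventually learns the truth.

```python
import random

# Hypothetical toy environment: "safe" always pays 1;
# "sewer" looks terrible a priori but secretly pays 10.
def reward(action):
    return 10.0 if action == "sewer" else 1.0

def run_agent(epsilon, steps=1000, seed=0):
    rng = random.Random(seed)
    est = {"safe": 1.0, "sewer": 0.0}   # prior: diving into a sewer looks bad
    counts = {"safe": 0, "sewer": 0}
    for _ in range(steps):
        if rng.random() < epsilon:
            # exploration: accept a possible actual failure
            action = rng.choice(["safe", "sewer"])
        else:
            # exploitation: trust current estimates
            action = max(est, key=est.get)
        r = reward(action)
        counts[action] += 1
        est[action] += (r - est[action]) / counts[action]  # running-mean update
    return est

print(run_agent(epsilon=0.0))  # never explores: "sewer" estimate stays spurious at 0
print(run_agent(epsilon=0.1))  # explores: discovers "sewer" actually pays 10
```

With `epsilon=0` the agent’s counterfactual about the sewer is never tested, so the wrong estimate is self-confirming; with any positive epsilon it eventually checks, at the cost of occasionally taking what looks (and, in a less friendly universe, would be) a losing action.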
Something about this feels compelling… I need to do some empiricism to understand what my counterfactuals are. By the time a real human gets to the 5-and-10 problem they’ve done enough of that, but if you just appear in a universe and it’s your first experience, I’m not too surprised you’d need to actually check these fundamentals.
(I’m not sure if this actually matches up philosophically with the logical inductors.)