I think we can go a bit further and predict that backwards causation will be a useful concept in some very specific cases, and that it will break down far above the scale at which the ordinary second law operates.
We “see” backwards causation when we know the outcome but not how the system will get there. What does that sound like a hallmark of? Optimization processes! We can predict in advance that backwards causation will be a useful idea for talking about the behavior of some optimization processes, but that it will stop contributing useful information once we zoom in past the “intentional stance” level of description.
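A minimal sketch of the point, using a toy optimizer of my own invention (the specific objective and step size are illustrative assumptions, not anything from the text): from wildly different starting conditions and along entirely different trajectories, gradient descent on a quadratic lands at the same place, so the endpoint is predictable "as if" the future outcome were causing the present behavior, even though we never trace any trajectory.

```python
import random

def grad_descent(x, steps=1000, lr=0.1):
    # Minimize the toy objective f(x) = (x - 3)^2; its gradient is 2*(x - 3).
    for _ in range(steps):
        x -= lr * 2 * (x - 3)
    return x

# Start from very different initial conditions ("microstates").
starts = [random.uniform(-100, 100) for _ in range(5)]
ends = [grad_descent(x0) for x0 in starts]

# The trajectories differ, but the outcome is knowable in advance:
# every run lands at x = 3, the minimum the optimizer "aims" at.
print(all(abs(x - 3) < 1e-6 for x in ends))  # True
```

Knowing the optimizer's goal is enough to predict where it ends up; knowing the intermediate states adds nothing, which is exactly the regime where talk of the outcome "pulling" the system toward it earns its keep.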