I suspect the situation is vastly more complicated than that. Revealed preference theory highlights the contradiction between stated preferences and actions. The misaligned-incentives model treats (a part of) a person as a separate agent with distinct short-term goals. But humans are not well modeled as a collection of agents. We are the messy result of billions of years of evolution, with some random mutations becoming metastable through sheer chance; all human behavior is a side effect of that. Certainly both RPT and MIT can serve as rough starting points, and if someone actually simulated human behavior numerically, the two could be among the algorithms used. But I am skeptical that together they would explain or predict a significant fraction of what we do.
As far as the lack of predictive ability goes, I think you're right. I'm mostly just trying to draw out a common dichotomy that comes up in certain kinds of discussions about how we ought to spend our time.
For example, some people enjoy playing video games but occasionally feel vaguely ashamed that they didn't spend their time doing something more productive. In these cases, they may be unsure which view to take: RPT says the time they spent reveals their true preference for video games, while MIT says they fell prey to a short-term incentive structure. This becomes an issue when we want to achieve long-term goals: we want to be true to our nature and not lie to ourselves about what we want, but we also want to avoid local maxima in the form of quick pleasure.