At some point, a temperature control system needs to take actions to control the temperature. Choosing the correct action depends on responding to what the temperature actually is, not what you want it to be or what you expect it to be after you take the (not-yet-determined) correct action.
If you are picking your action based on predictions, you need to make conditional predictions based on the different actions you might take, so that you can pick the action whose conditional prediction is closest to the target. And this means your conditional predictions can’t all be “it will be the target temperature”, because that wouldn’t let you differentiate good actions from bad actions.
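To make that concrete, here is a minimal sketch of action selection via conditional predictions, using an invented one-step thermal model (all names and constants here are hypothetical, not from any real control library):

```python
# Toy conditional-prediction controller: predict the outcome of each
# candidate action, then pick the action whose prediction lands closest
# to the target. The thermal model is invented for illustration.

def predict_temp(current_temp: float, heater_on: bool) -> float:
    """Conditional prediction: temperature one step from now, given this action."""
    ambient = 15.0                       # heat bleeds toward ambient air
    heating = 2.0 if heater_on else 0.0  # heater's contribution per step
    return current_temp + 0.1 * (ambient - current_temp) + heating

def choose_action(current_temp: float, target: float) -> bool:
    # The two candidate actions get two *different* predictions; if both
    # predictions were "it will be the target temperature", this min()
    # could not distinguish good actions from bad ones.
    return min([True, False],
               key=lambda act: abs(predict_temp(current_temp, act) - target))

print(choose_action(current_temp=18.0, target=21.0))  # True: heating predicted closer
print(choose_action(current_temp=25.0, target=21.0))  # False: coasting predicted closer
```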
It is possible to build an effective temperature control system that doesn’t involve predictions at all: you can precompute a strategy (like “turn the heater on below X temp, turn it off above Y temp”, sketched below) and program the control system to execute that strategy without it understanding how the strategy was generated, in which case it need not have models or make predictions at all. But if you were going to rely on predictions to pick the correct action, it would be necessary to make some (conditional) predictions that are not simply “I will succeed”.
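For contrast with the prediction-based sketch above, here is the no-predictions version: ordinary hysteresis (“bang-bang”) control, where the thresholds are handed to the controller and it never models or predicts anything (again, everything here is illustrative):

```python
# Precomputed strategy: turn the heater on below one threshold, off
# above another. No model, no predictions; it just executes the rule.

def make_thermostat(on_below: float, off_above: float):
    heater_on = False
    def step(temp: float) -> bool:
        nonlocal heater_on
        if temp < on_below:
            heater_on = True
        elif temp > off_above:
            heater_on = False
        # between the thresholds, keep doing whatever we were doing
        return heater_on
    return step

thermostat = make_thermostat(on_below=19.0, off_above=22.0)
for temp in [18.0, 20.0, 23.0, 20.0]:
    print(temp, thermostat(temp))  # on at 18, stays on at 20, off at 23, stays off
```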
From tone and context, I am guessing that you intend for this to sound like motivated reasoning, even though it doesn’t particularly remind me of motivated reasoning. (I am annoyed that you are forcing me to guess what your intended point is.)
I think the key characteristic of motivated reasoning is that you ignore some knowledge or model that you would ordinarily employ while under less pressure. If you stay up late playing Civ because you simply never had a model saying that you need a certain amount of sleep in order to feel rested, then that’s not motivated reasoning; it’s just ignorance. It only counts as motivated reasoning if you yourself would ordinarily reason that you need a certain amount of sleep in order to feel rested, but are temporarily suspending that ordinary reasoning because you dislike its current consequences.
(And I think this is how most people use the term.)
So, imagine a scenario where you need 100J to reach your desired temp but your heating element can only safely output 50J.
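To keep the numbers pinned down, here is that scenario under an invented linear heat model (the heat capacity and all names are hypothetical, purely for illustration):

```python
# Toy version of the scenario: 100J needed, 50J safely available.
# The linear model and all constants are invented for illustration.

HEAT_CAPACITY = 10.0                           # J per degree, hypothetical
current, target = 20.0, 30.0
needed = (target - current) * HEAT_CAPACITY    # 100 J to reach the target
safe_output = 50.0                             # heater's safe limit

def honest_prediction(joules: float) -> float:
    """The model you would ordinarily employ: temperature after adding `joules`."""
    return current + joules / HEAT_CAPACITY

print(needed)                          # 100.0
print(honest_prediction(safe_output))  # 25.0: the honest prediction falls short
```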
If you were to choose to intentionally output only 50J, while predicting that this would somehow reach the desired temperature (contrary to the model you regularly employ in more tractable situations), then I would consider that a central example of motivated reasoning. But your model does not seem to me to explain how this strategy arises.
Rather, you seem to be describing a reaction where you try to output 100J, meaning you are choosing an action that is actually powerful enough to accomplish your goal, but which will have undesirable side-effects. This strikes me as a different failure mode, which I might describe as “tunnel vision” or “obsession”.
I suppose if your heating element is in fact incapable of outputting 100J (even if you allow side-effects), and you are aware of this limitation, and you choose to ask for 100J anyway, while expecting this to somehow generate 100J (directly contra the knowledge we just assumed you have), then that would count as motivated reasoning. But I don’t think your analogy is capable of representing a scenario like this, because you are inferring the controller’s “expectations” purely from its actions, and this type of inference doesn’t allow you to distinguish “the controller is unaware that its heating element can’t output 100J” from “the controller is aware, but choosing to pretend otherwise”. (At least, not without greatly complicating the example and considering controllers with incoherent strategies.)
Meta-level feedback: I feel like your very long comment has wasted a lot of my time in order to show off your mastery of your own field in ways that weren’t important to the conversation; e.g. the stuff about needing to react faster than the thermometer never went anywhere that I could see, and I think your 5-paragraph clarification that you are interpreting the controller’s actions as implied predictions could have been condensed to about 3 sentences. If your comments continue to give me similar feelings, then I will stop reading them.