But the problem runs deeper than that. If we draw an arrow in the direction of the deterministic function, we end up drawing an arrow of time from the more refined version of the structure to the coarser version, which points in the opposite direction from all of our examples.
As I currently understand it after thinking about this for a bit, we are talking about the coarseness of the model from the perspective of the model in its own timeframe, not the timeframe that we are in. It would make sense for our predictions of the model to become coarser with each step forward in time, if we are predicting from a fixed moment into the future. I don't know if this makes sense, but I would be grateful for a clarification!
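To make the worry concrete, here is a minimal toy sketch (my own illustration, not from the original discussion) of a deterministic map that goes from refined states to coarse states. The forward direction is a well-defined function, but it is many-to-one, so it cannot be reversed; this is why drawing the arrow of time along the deterministic function points from refined to coarse:

```python
# Toy example: a deterministic coarse-graining map.
# Each "refined" microstate is an integer; the coarse map keeps only
# its parity, so many microstates collapse into one macrostate.

def coarsen(microstate: int) -> int:
    """Deterministic map from a refined state to a coarser one."""
    return microstate % 2  # keeps only even/odd

# Deterministic in the refined -> coarse direction:
assert coarsen(4) == coarsen(6) == 0

# ...but not invertible: from the coarse state alone we cannot tell
# which microstate produced it.
preimage_of_zero = [m for m in range(8) if coarsen(m) == 0]
print(preimage_of_zero)  # several microstates map to one macrostate
```

The point of the sketch is only that determinism of the map and recoverability of the past come apart: the function is perfectly deterministic forward, yet information about the refined state is lost at every application.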
Good question. This applies at the scale of whole systems: for example, a democratic system is inherently more reversible than a non-democratic one. An action that works against the reversibility of a system would be, for example, the removal of freedom of speech, since it narrows down the potential pathways of future civilizations. Reversibility has an opportunity cost inherent to it, because it asks us to take into consideration the possibility that other moral theories are correct. This is like Pascal's mugging, but with the stakes that if we hold the wrong moral theory, we lose a lot. It follows that through a utilitarian lens reversibility might look less effective: some actions that seem good from the utilitarian standpoint, such as turning everything into hedonium, are bad from a reversibility standpoint, since from that state nothing can be changed.