No, what I am saying is that humans judge things to be more different when the difference will have important real-world consequences in the future. Consider two cases: one where the water will be tipped into the pool later, and the other where the water will be tipped into a nuclear reactor, which will explode if the salt content isn't quite right.
There need not be any difference in the bucket or water whatsoever. While the current bucket states look the same, there is a noticeable macrostate difference between the nuclear reactor exploding and not exploding, in a way that there isn't a macrostate difference between marginally different eddy currents in the pool. I was specifying a somewhat odd information-theoretic definition of significance that made this work, but simply saying that the more energy is involved, the more significant the difference, works too. Nowhere are we referring to human judgement; we are referring to hypothetical future consequences.
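Here's a toy sketch of that energy-weighted notion of significance (everything here, from the `Future` class to the joule figures, is invented for illustration; it's a cartoon of the idea, not a real impact measure):

```python
from dataclasses import dataclass

@dataclass
class Future:
    macrostate: str       # coarse-grained description, e.g. "reactor intact"
    energy_joules: float  # energy at stake if this macrostate differs from the alternative

def significance(a: Future, b: Future) -> float:
    """Significance of the difference between two hypothetical futures:
    zero if they share a macrostate, otherwise scaled by the energy at stake."""
    if a.macrostate == b.macrostate:
        return 0.0
    return max(a.energy_joules, b.energy_joules)

# Identical buckets, very different downstream consequences:
pool = significance(Future("pool, water shuffled", 1e2),
                    Future("pool, water shuffled", 1e2))
reactor = significance(Future("reactor intact", 1e2),
                       Future("reactor explodes", 1e15))
print(pool)     # 0.0  -- marginally different eddies are the same macrostate
print(reactor)  # 1e15 -- the reactor difference dominates
```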
Actually, the rule "your action and its reversal should not make a difference worth tracking in its world model" would work OK here (assuming a sensible value of information). The rule that it shouldn't knowably affect large amounts of energy is good too. So, for example, it can shuffle an already well-shuffled pack of cards, even if the order of those cards will have some huge effect. It can act freely without worrying about chaotic weather effects, the chance of it causing a hurricane being counterbalanced by the chance of it stopping one. But if it figures out how to twitch its elbow in just the right way to cause a hurricane, it can't do that. This robot won't tip the nuclear bucket, for much the same reason. It also can't make a nanobot that would grey-goo the Earth, or hack into nukes to set them off. All these actions affect a large amount of energy in a predictable direction.
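A similarly cartoonish sketch of the "don't knowably affect large amounts of energy in a predictable direction" rule (the threshold, probabilities, and signed-energy convention are all made up for this example, not taken from any existing proposal):

```python
ENERGY_LIMIT = 1e9  # joules; an arbitrary cutoff for this sketch

def knowable_directed_energy(outcome_dist):
    """Expected energy pushed in a predictable direction: sum the signed
    energy of each predicted outcome, weighted by its probability, and take
    the magnitude. Symmetric chaos (hurricane caused vs. prevented) cancels."""
    return abs(sum(p * signed_energy for p, signed_energy in outcome_dist))

def permitted(outcome_dist):
    return knowable_directed_energy(outcome_dist) < ENERGY_LIMIT

# Shuffling an already-shuffled deck: huge downstream effects are possible,
# but the robot can't predict their direction, so they cancel out.
shuffle = [(0.5, +1e14), (0.5, -1e14)]

# The elbow twitch that reliably causes a hurricane: same energy scale,
# but knowably pointed in one direction.
elbow_twitch = [(0.9, +1e14), (0.1, 0.0)]

print(permitted(shuffle))       # True  -- expected directed energy ~ 0 J
print(permitted(elbow_twitch))  # False -- ~9e13 J aimed in one direction
```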