I just don’t know whether I agree with your assertion that, e.g., AUP “defines” what not to do.
I think I mostly meant that it is not learned.
I kind of want to argue that this means the effect of not-learned things can be traced back to researchers’ brains, rather than to experience with the real world. But that’s not exactly right, because the actual impact penalty can depend on properties of the world, even if it doesn’t use learning.
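For context, a sketch of the AUP penalty as I understand Turner et al.’s formulation (the auxiliary reward set and the no-op action are from that formulation, not from this conversation):

```latex
\text{Penalty}(s, a) \;=\; \sum_{i=1}^{n} \bigl| Q_{R_i}(s, a) - Q_{R_i}(s, \varnothing) \bigr|
```

Here \(R_1, \dots, R_n\) are auxiliary reward functions and \(\varnothing\) is a no-op action. Nothing in the penalty is learned from examples of what not to do, yet the \(Q_{R_i}\) values depend on the environment’s dynamics, which is why the penalty can depend on properties of the world without itself being learned.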
How pessimistic are you about this concern for this idea?
I don’t know; it feels too early to say. I think if the norms end up in some hardcoded form such that they never update over time, nearest unblocked strategies feel very likely. If the norms are evolving over time, then it might be fine. The norms would need to evolve at the same “rate” as the rate at which the world changes.