Hmm, looks like I should add an examples section and more background on what I mean by freedom. What you are describing sounds like a traffic system that values the ergodic efficiency of its managed network, and you are showing a way that a participant can have very non-ergodic results. That sounds more like an engineering problem than what I'm imagining.
Some examples, off the top of my head, of what I mean by loss of freedom resulting from a powerful agent's value system:
A paperclip maximizer terraforming the earth prevents any value system other than paperclip maximization from sharing the earth's environment.
Humans' value for cheap foodstuffs results in monoculture crop fields, which cut off the values of forest and grassland ecosystems (hiding places, alternating food sources that last through the seasons, etc.).
A drug-dependent parent changes a child's environment, removing the child's freedom to have a reliable schedule, security, etc.
Or, riffing off your example: a superintelligent traffic controller starts city planning, bulldozing blocks of car-free neighborhoods because they stand in the way of a 5% city-wide traffic-flow improvement.
Essentially, what I'm trying to describe is that freedoms need to be values unto themselves, with characteristics that are functionally different from the usual utility-function framing built around metric maximization (like gradient descent). Freedoms describe boundary conditions within which metric maximization is allowed, but impose steep penalties for surpassing their bounds. Their general mathematical form is a manifold surrounding some region of state space, whereas the general form of most utility-function talk is finding a minimum/maximum of some state space.
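To make the shape of that distinction concrete, here is a minimal sketch in Python (my own illustrative framing; the function names and numbers are made up). The task utility is an ordinary "find the maximum" objective, while the freedom term is nearly flat inside its allowed boundary and penalizes steeply once the boundary is crossed, so it acts as a constraint surface rather than another quantity to climb:

```python
import numpy as np

def task_utility(plan: np.ndarray) -> float:
    """Ordinary metric maximization, e.g. city-wide traffic flow."""
    return -float(np.sum((plan - 5.0) ** 2))  # peaks when every entry equals 5.0

def freedom_penalty(plan: np.ndarray, center: np.ndarray,
                    radius: float, steepness: float = 1000.0) -> float:
    """Roughly zero inside the allowed boundary; a steep barrier once the
    plan surpasses it (a hinge-style penalty, not a gradient to follow)."""
    overshoot = max(0.0, float(np.linalg.norm(plan - center)) - radius)
    return -steepness * overshoot ** 2

def combined_objective(plan: np.ndarray, center: np.ndarray, radius: float) -> float:
    # Maximization operates freely inside the boundary; crossing it swamps
    # any gain the task utility could offer.
    return task_utility(plan) + freedom_penalty(plan, center, radius)

# The same optimizer scored inside vs. outside the boundary.
center = np.zeros(2)
print(combined_objective(np.array([1.0, 1.0]), center, radius=3.0))  # inside: task utility only
print(combined_objective(np.array([5.0, 5.0]), center, radius=3.0))  # outside: penalty dominates
```

The point of the sketch is just that the second plan is the task-utility maximum, yet the combined objective rejects it, because the boundary term is not something you trade off against at the margin.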
Ah ok, I think I'm following you. To me, freedom describes a kind of bubble around a certain physical or abstract dimension, whose center is at another agent. Its main use is to limit computational complexity when sharing an environment with other agents. If I have a set of freedom values, I don't have to infer the values of another agent so long as I don't enter their freedom bubbles. In the traffic example, how the neighborhood is constructed should be irrelevant to McTraffic; all it needs to know is (a) there are other agents already present in the neighborhood, and (b) it wants to change the nature of the neighborhood, which would enter those agents' freedom bubbles. Therefore it needs to negotiate with the inhabitants (so yes, at this step there's an inference via dialogue going on).
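A rough sketch of that "bubble check" as I understand it (the class and function names here are hypothetical, not anything from your description): the planner never models what the inhabitants value; it only asks whether a proposed change would enter any agent's freedom bubble, and if so it must negotiate instead of acting unilaterally.

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class FreedomBubble:
    owner: str
    center: tuple   # the agent's location along some physical or abstract dimension
    radius: float   # extent of the protected region around that agent

def enters_bubble(bubble: FreedomBubble, change_center: tuple, change_radius: float) -> bool:
    """True if the proposed change would overlap this agent's bubble."""
    dist = sum((a - b) ** 2 for a, b in zip(bubble.center, change_center)) ** 0.5
    return dist < bubble.radius + change_radius

def plan_change(change_center: tuple, change_radius: float,
                bubbles: Sequence[FreedomBubble]) -> str:
    """The planner only checks whose bubbles the change would enter."""
    affected = [b.owner for b in bubbles if enters_bubble(b, change_center, change_radius)]
    if affected:
        return f"negotiate with {affected}"  # value inference happens here, via dialogue
    return "proceed: no freedom bubble is entered"

# Two inhabitants and a proposed bulldozing footprint.
bubbles = [FreedomBubble("alice", (0.0, 0.0), 1.0), FreedomBubble("bob", (10.0, 0.0), 1.0)]
print(plan_change((0.5, 0.5), 2.0, bubbles))    # overlaps alice's bubble -> negotiate
print(plan_change((50.0, 50.0), 2.0, bubbles))  # far from everyone -> proceed
```

The computational saving you describe falls out of the structure: the expensive step (inferring another agent's values) is only triggered for the agents whose bubbles the plan actually touches.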