>variables in the real world are rarely completely independent
To some extent, the diminishing returns on investing the agent’s “budget” capture this non-independence: increasing one variable must reduce some other, because there is less budget to go around. More complicated trade-offs seem modellable in a similar way.
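As a minimal sketch of that dynamic (the square-root utility curve and the two-variable setup here are illustrative assumptions, not part of the original model):

```python
import math

BUDGET = 10.0

def value(spend: float) -> float:
    # Diminishing returns: each extra unit of budget yields less
    # marginal value than the last (square root is one such curve).
    return math.sqrt(spend)

# The two variables never interact directly, but because they draw on
# the same fixed budget, raising one necessarily lowers the other.
for spend_on_a in (2.0, 5.0, 8.0):
    spend_on_b = BUDGET - spend_on_a
    print(f"A: spend={spend_on_a:.1f}, value={value(spend_on_a):.2f} | "
          f"B: spend={spend_on_b:.1f}, value={value(spend_on_b):.2f}")
```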
>Secondly, how does this model deal with adversarial agents?
It doesn’t, not really.
>Finally, how well does this model deal with the fact that human values might change over time?
It doesn’t; that is a more advanced consideration; see e.g. https://www.lesswrong.com/posts/Y2LhX3925RodndwpC/resolving-human-values-completely-and-adequately