Suppose I build a deterministic agent which has a value function in the most literal sense, i.e. it has to call the function to get the values of various alternative actions in order to make a decision about which to perform. Would you still say it has no use for value judgements?
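To make the thought experiment concrete, here is a minimal sketch (purely illustrative; the action names and scores are invented) of an agent whose decision procedure literally consists of calling a value function on each candidate action:

```python
def value(action: str) -> float:
    """Hypothetical value function: maps each candidate action to a score."""
    scores = {"wait": 0.1, "explore": 0.7, "exploit": 0.5}
    return scores[action]

def decide(actions: list[str]) -> str:
    """Deterministic choice: evaluate value() on each action, pick the best.

    Note the agent cannot know its choice before the evaluation runs --
    the call to value() *is* the decision procedure, not a report on a
    decision already made elsewhere.
    """
    return max(actions, key=value)

print(decide(["wait", "explore", "exploit"]))  # prints "explore"
```

The agent is fully deterministic, yet the value judgements are doing real work: delete `value()` and there is no decision procedure left.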
An agent, an entity that acts, cannot say "what will be, will be", because it makes decisions, and because the decisions it makes are a component of the future. If it does not know the decision it will make before it makes it, it is in a state of subjective uncertainty about the future. Subjective uncertainty and objective determinism are quite compatible.
I think it is possible that you are being misled by fictional evidence. In Arrival, the Heptapods' knowledge of the future is a straightforward extension of a fixed future, but everything we know indicates considerable barriers between determinism and foreknowledge.