(The impossible item.) Given an agent's program, define its preferences.
If you have the agent’s program, you already have a fairly comprehensive model of it; the harder problem is inferring preferences from behaviour alone. That is the problem Tim Freeman addressed here.