Thanks for the answers!
We should categorise something as a goal-directed agent if it scores highly on most of these criteria, not just if it scores perfectly on all of them. So I agree that you don’t need one goal forever, but you do need it for more than a few minutes. And internal unification also means that the whole system is working towards this goal.
If coherence is about having the same goal for a “long enough” period of time, then it makes sense to me.
By “sensitive” I merely mean that differences in expected long-term or large-scale outcomes sometimes lead to differences in current choices.
So the thing that judges outcomes in the goal-directed agent is “not always privileging short-term outcomes”? Then I guess it’s also a scale, because there’s a big difference between a system that has one case where it privileges long-term outcomes over short-term ones, and a system that consistently focuses on long-term outcomes.
Yeah, I think there’s still much more to be done to make this clearer. I guess my criticism of mesa-optimisers was that the definition talked about explicit representation of the objective function (whatever that means). Whereas I think my definition relies more on the values of choices being represented. Idk how much of an improvement this is.
I agree that the explicit representation of the objective is weird. But on the other hand, it’s an explicit and obvious weirdness, one that calls for either clarification or changes. Whereas in your criteria, I feel that essentially the same idea is made implicit/less weird, without actually bringing a better solution. Your approach might be better in the long run, possibly because rephrasing the question in these terms lets us find a non-weird way to define this objective.
I just wanted to point out that in our current state of knowledge, I feel like there are drawbacks in “hiding” the weirdness like you do.
I don’t really know what it means for something to be a utility function. I assume you could interpret it that way, but my definition of goals also includes deontological goals, which would make that interpretation harder. I like the “equivalence classes” thing more, but I’m not confident enough about the space of all possible internal concepts to claim that it’s always a good fit.
One idea I had for defining goals is as a temporal logic property (for example in LTL) on states. That lets you express things like “I want to reach one of these states” or “I never want to reach this state”; the latter looks like a deontological property to me. Thinking some more about this led me to see two issues:
First, it doesn’t let you encode preferences of some state over another. That might be solvable by adding a partial order with nice properties, like Stuart Armstrong’s partial preferences.
Second, the system doesn’t have access to the states of the world, it only has access to its abstractions of those states. Here we go back to the equivalence classes idea. Maybe a way to cash out your internal abstractions and Paul’s ascriptions of beliefs is through an equivalence relation on the states of the world, such that the goal of the system is defined on the equivalence classes for this relation.
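To make this combination concrete, here’s a toy sketch of the two ideas together: an abstraction function maps concrete world states to equivalence classes, and the goal is an LTL-style property (reachability “F” or safety/never “G ¬”) evaluated on the abstract trace rather than on the concrete states. Everything here (the `abstract` function, the example states) is hypothetical, chosen only for illustration:

```python
def abstract(state):
    """Map a concrete world state to its equivalence class.
    Toy abstraction: the system only perceives the sign of a number."""
    return "negative" if state < 0 else "non-negative"

def eventually(pred, trace):
    """LTL 'F pred': some state in the trace satisfies pred (a reachability goal)."""
    return any(pred(s) for s in trace)

def never(pred, trace):
    """LTL 'G not pred': no state in the trace satisfies pred (a deontological goal)."""
    return not any(pred(s) for s in trace)

# A trace of concrete world states, viewed through the abstraction:
trace = [abstract(s) for s in [3, 1, -2, 5]]

print(eventually(lambda c: c == "negative", trace))  # True: a negative state is reached
print(never(lambda c: c == "negative", trace))       # False: the 'never negative' goal is violated
```

The point of the sketch is just that the goal predicates only ever see equivalence classes, so two world states the abstraction cannot distinguish are necessarily treated identically by the goal.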
I expect that asking “what properties do these utility functions have” will be generally more misleading than asking “what properties do these goals have”, because the former gives you an illusion of mathematical transparency. My tentative answer to the latter question is that, due to Moravec’s paradox, they will have the properties of high-level human thought more than they have the properties of low-level human thought. But I’m still pretty confused about this.
Agreed that the first step should be the properties of goals. I just also believe that if you get some nice properties of goals, you might know what constraints to add to utility functions to make them more “goal-like”.
Your last sentence seems to contradict what you wrote about Dennett. I understand it as you saying “goals would be like high-level human goals”, while your criticism of Dennett was that the intentional stance doesn’t necessarily work on NNs because they don’t have to have the same kind of goals as us. Am I wrong about one of those opinions?