Really cool! This reminds me of Braitenberg vehicles.
I had a notion here that I could stochastically introduce a new goal that would minimize total suffering over an agent’s life-history. I tried this, and the most stable solution turned out to be this: introduce an overwhelmingly aversive goal that sends the agent running far away from all of its other goals, screaming. Fleeing in perpetual terror, it ends up too far from its attractor-goals to feel much expected valence toward them, and so feels little regret about abandoning them. And it is, in a sense, satisfied that it is always getting further and further away from the object of its dread.
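For anyone curious, here's a minimal sketch of the dynamic I mean. All of it is hypothetical (a 1D agent, valence that decays with squared distance, made-up strengths and step sizes, none of it from the original simulation): the agent climbs the gradient of total felt valence, and with one sufficiently aversive goal in the mix, it just flees forever while its total "suffering" (dread plus unrealized attraction) decays toward zero.

```python
import math

def simulate(steps=200, aversion_strength=-50.0, step_size=0.5):
    """Toy model (hypothetical parameters): an agent on a 1D line with
    two mild attractor goals and one overwhelmingly aversive goal."""
    attractors = [(-2.0, 1.0), (2.0, 1.0)]  # (position, positive valence)
    aversive = (0.0, aversion_strength)     # the dreaded goal, at the origin
    pos = 0.1                               # start right next to it

    def felt_valence(goal_pos, strength, x):
        # expected valence falls off with squared distance from the goal
        return strength / (1.0 + (x - goal_pos) ** 2)

    suffering = []
    for _ in range(steps):
        goals = attractors + [aversive]
        # gradient of total felt valence; the agent climbs it
        grad = sum(
            -2 * (pos - g) * s / (1.0 + (pos - g) ** 2) ** 2
            for g, s in goals
        )
        pos += step_size * math.copysign(1.0, grad) if grad != 0 else 0.0
        # "suffering" = felt dread + regret (unrealized positive valence
        # toward the attractors it is running away from)
        dread = -felt_valence(aversive[0], aversive[1], pos)
        regret = sum(felt_valence(g, s, pos) for g, s in attractors)
        suffering.append(dread + regret)
    return pos, suffering

final_pos, suffering = simulate()
print(final_pos)        # the agent has fled far from all of its goals
print(suffering[0], suffering[-1])  # suffering starts high, decays toward zero
```

Because the aversive pull dominates the attractors' pull at every distance, the agent never turns back, and both the dread and the regret become negligible as it recedes.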
Interestingly, this seems somewhat similar to the reactions of severely traumatized people, whose senses partially shut down so that they stop feeling or wanting anything. And then there’s also suicide, for when the “avoid suffering” goal grows too strong relative to the others. Humans have a counterbalancing goal of avoiding death, but your agents didn’t have an equivalent desire to stay “alive” (or within reach of their other goals).