Seems like the concept of “coherence” used here is inclined to treat simple stimulus-response behavior as highly coherent. For example, the author puts a thermostat in the supercoherent-but-unintelligent corner of one of his graphs.
But stimulus-response behavior, like a blue-minimizing robot, only looks like coherent goal pursuit in a narrow set of contexts. The relationship between its behavioral patterns and its progress towards goals is context-dependent, and will go off the rails if you take the robot out of the narrow set of contexts where it fits. That kind of failure isn’t “a hot mess of self-undermining behavior”, so it’s not the lack of coherence that this question was designed to get at.
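A toy sketch of the point (my own construction, not from the original post): a fixed stimulus-response rule can score perfectly against the designer’s real goal in the environment it was built for, and then fail completely when the correlation it relied on breaks.

```python
# A "blue-minimizing" stimulus-response rule: zap anything blue.
def blue_minimizer(obj):
    return "zap" if obj["color"] == "blue" else "ignore"

# The designer's actual goal: destroy harmful things, spare the rest.
def goal_score(obj, action):
    if obj["harmful"]:
        return 1 if action == "zap" else 0
    return 1 if action == "ignore" else 0

# Environment A: blueness and harmfulness coincide (the design context).
env_a = [{"color": "blue", "harmful": True}, {"color": "red", "harmful": False}]
# Environment B: the correlation breaks (blue things are harmless, red things aren't).
env_b = [{"color": "blue", "harmful": False}, {"color": "red", "harmful": True}]

score_a = sum(goal_score(o, blue_minimizer(o)) for o in env_a) / len(env_a)
score_b = sum(goal_score(o, blue_minimizer(o)) for o in env_b) / len(env_b)
print(score_a, score_b)  # → 1.0 0.0
```

The rule itself never changes; only the environment does, which is why its apparent “coherent goal pursuit” is really a fact about the narrow context rather than about the robot.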
Here’s a hypothesis about the inverse correlation arising from your observation: When we evaluate a thing’s coherence, we sample behaviours in environments we expect to find the thing in. More intelligent things operate in a wider variety of environments, and the environmental diversity leads to behavioural diversity that we attribute to a lack of coherence.
Without thinking about it too much, this fits my intuitive sense. An amoeba can’t possibly demonstrate a high level of incoherence because it simply can’t do a lot of things, and whatever it does would have to be very much in line with its goal (?) of survival and reproduction.
A hypothesis for the negative correlation:
More intelligent agents have a larger set of possible courses of action that they’re potentially capable of evaluating and carrying out. But picking an option from a larger set is harder than picking an option from a smaller set. So max performance grows faster than typical performance as intelligence increases, and errors look more like ‘disarray’ than like ‘just not being capable of that’. E.g., compare a human who left the window open while running the heater on a cold day with a thermostat that left the window open while running the heater.
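The max-vs-typical gap can be illustrated with a small simulation (again my own sketch, under the assumption that the agent picks the option whose *noisy* value estimate is highest): as the option set grows, the best available option improves faster than the option actually chosen, so the shortfall from the best widens.

```python
import random

random.seed(0)

def trial(n_options, noise=1.0):
    # True values of the available options, and the agent's noisy estimates of them.
    values = [random.gauss(0, 1) for _ in range(n_options)]
    estimates = [v + random.gauss(0, noise) for v in values]
    # The agent picks the option that *looks* best.
    chosen = values[max(range(n_options), key=lambda i: estimates[i])]
    return max(values), chosen  # (max performance, typical performance)

def average_gap(n_options, trials=20000):
    gaps = [best - chosen for best, chosen in (trial(n_options) for _ in range(trials))]
    return sum(gaps) / len(gaps)

small_gap = average_gap(2)    # few options: little room to misstep
large_gap = average_gap(50)   # many options: the same noise costs more
print(small_gap, large_gap)
```

With more options, the agent’s mistakes are mistakes of *selection* among things it could have done, which is exactly the kind of error that reads as ‘disarray’ rather than incapacity.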
A Second Hypothesis: Higher intelligence often involves increasing generality—having a larger set of goals, operating across a wider range of environments. But that increased generality makes the agent less predictable by an observer who is modeling the agent as using means-ends reasoning, because the agent is not just relying on a small number of means-ends paths in the way that a narrower agent would. This makes the agent seem less coherent in a sense, but that is not because the agent is less goal-directed (indeed, it might be more goal-directed and less of a stimulus-response machine).
These seem very relevant for comparing very different agents: comparisons across classes, or of different species, or perhaps for comparing different AI models. Less clear that they would apply for comparing different humans, or different organizations.