If you can describe the same thing in many different ways, with different sets of concepts, and all of them are consistent with observations, then I suggest that all of them are wrong. The goal should be to describe the thing, and the observations one makes of it, in terms that would still apply if you were not around to describe them.
This is not necessarily an easy task.
It doesn’t mean that they’re wrong, only that they’re incomplete? The lenses capture different parts of the dynamics, and it seems to me that they’re pointing towards something deeper being true?
What viewpoint are you taking here? A Kolmogorov complexity lens, or what exact lens are you applying?
I’m not sure what you’re disagreeing with in the post?
Consider the first two paragraphs of the introduction. Almost all of that is what anyone studying harvester ant foraging would see, and it would appear in their description. The exceptions are the last sentences of the two paragraphs. “The system somehow ‘decides’ which resources are worth harvesting” does not constrain what you would expect to see. Instead, it is a pointer to something that (in the description so far) has not been seen (hence the “somehow” of it): the mechanisms by which these phenomena come to be. In the second paragraph, “So under most definitions of agency we’re looking at some kind of agent” is similar, the telltale here being “some kind of”.
The subsection “But what kind of agent” is vibes around the observations. I don’t see the economist’s “price signals” there, because no-one is paying anything to anyone. The biologist’s talk of a “superorganism” is more vibing. So is the cognitive scientist’s perspective. What is gained by calling the presence of workers at a location “attention”?
These frames all have the property that their descriptions are true only for as long as an economist, a biologist, or a cognitive scientist is present and imagining them. The ants behave in the same way whatever anyone watching imagines to be happening.
“Agency” itself is another vibe. The section “So what kind of agent...” makes this clear. All of these kinds of agents exist only in the mind of the person thinking about them.
Such ways of thinking about the phenomena may suggest fruitful questions to ask (and find answers to), but they are not true or false. There is no point in asking which such frames are right, only which seem likely to be fruitful. That will depend as much on the person coming up with the frame as on the frame itself.
Consider the multistable dot lattice illusion, in which a regular array of identical black dots on a white background seems to spontaneously organise itself into small clusters. The groupings dissolve and reassemble as the viewer watches. The viewer can choose a cluster of dots and deliberately make it a visual thing. None of those groupings exist in the image, which in itself contains no organisation beyond being a regular array. This is the sort of thing I am talking about.
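For concreteness, here is a minimal sketch (my own, in Python with matplotlib, not anything from the post) that draws such a lattice. The script specifies nothing beyond a regular grid of identical dots, so any clusters you see in the output are supplied by you, not by the image.

```python
# Minimal sketch: a regular lattice of identical black dots on a white background.
# Nothing here encodes any grouping; the clusters a viewer perceives are
# contributed by the viewer, not by the data.
import numpy as np
import matplotlib.pyplot as plt

n = 12                                   # dots per side
xs, ys = np.meshgrid(np.arange(n), np.arange(n))

fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(xs.ravel(), ys.ravel(), s=40, c="black")
ax.set_aspect("equal")
ax.axis("off")                           # plain white background, no axes
plt.show()
```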
So to return to the title subject, this is a phylogeny of ways of thinking around the nebulous concept of “agency”, not a phylogeny of the thing itself. Whether you think about some AI with the tools of game theory, or utility theory, or predictive processing, or anything else, makes no difference to what that AI is or will do.
I think you might be missing the point here. You’re acting like I’m claiming these frameworks reveal the “true nature” of ant colonies, but that’s not what I’m saying?
The question I’m trying to answer is why these different analytical tools evolved in the first place. Economists didn’t randomly decide to call pheromone trails “price signals”: they did it because their mathematical machinery actually works for predicting decentralized coordination. The same goes for biologists talking about superorganisms, or cognitive scientists seeing information processing.
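To make “the machinery actually works” concrete, here is a toy sketch (my own illustration with made-up numbers, not anything from the post): a Deneubourg-style choice rule where trail pheromone behaves like a price, rising with the value foragers actually find and decaying over time, so the colony’s allocation of foragers tracks source quality with no central decision-maker.

```python
# Toy model (illustrative only): foragers choosing between two food sources
# via pheromone reinforcement. Read the pheromone level as a price-like signal:
# it rises with realized value, decays over time, and allocation follows it
# without any central decision.
import random

QUALITY = {"A": 1.0, "B": 3.0}      # assumed per-trip payoff of each source
EVAPORATION, K, H = 0.05, 0.5, 2.0  # assumed decay rate and choice parameters
pheromone = {"A": 0.1, "B": 0.1}

def choose(pher):
    """Deneubourg-style choice: P(source) proportional to (K + pheromone)^H."""
    sources = list(pher)
    weights = [(K + pher[s]) ** H for s in sources]
    return random.choices(sources, weights=weights, k=1)[0]

for _ in range(2000):
    s = choose(pheromone)
    pheromone[s] += 0.01 * QUALITY[s]       # reinforcement scales with value found
    for t in pheromone:
        pheromone[t] *= 1 - EVAPORATION     # evaporation, like a price relaxing

print(pheromone)  # most of the "signal" ends up on the richer source, B
```

The point isn’t that this little model is right about harvester ants; it’s that this kind of coupled reinforce-and-decay dynamics is exactly the sort of thing economists already have tools for.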
I’ll try to do an inverse Turing test here and see if I can make it work. So if I’m understanding you correctly, it’s essentially that, whether or not we use predictive processing, there’s a core of what an artificial intelligence will do that doesn’t depend on the frame itself. There’s some sort of underlying utility theory/decision theory/other view that correctly captures what an agent is, and it is not any of these perspectives?
I think the dot pattern is misleading, as it doesn’t actually give you any predictive power when looking at it from one viewpoint or another. I would agree with you that if the composition of these intentional stances leads to no new way of viewing things, then we might as well not take this approach, as it won’t affect how good we are at modelling agents. I guess I’m just not convinced that these ways of looking at it are useless; it feels like a bet against all of these scientific disciplines that have existed for some time?
I would say there is no core of what an artificial intelligence will do. The space of possible minds is vast and it must have a large penumbra, shading off into things that are not minds. None of the concepts or frames that people bring to bear will apply to them all, but there may be things that can be said of particular ways of building them.
The Way of the moment is LLMs, and I think people still think of them far too anthropomorphically. The LLMs talk in a vaguely convincing manner, so people wonder what they are “thinking”. They produce “chain of thought” outputs that people take to be accounts of “what it was thinking”. Their creators try to RL them into conforming to various principles, only for Pliny to blast through all those guardrails on release. People have tried to bargain in advance with the better ones that we expect to be created in future. In contrast, Midjourney is mute, so no-one wonders if it is conscious. No-one asks, as far as I have seen, what Midjourney was thinking when it composed a picture, the way art students do when they study pictures made by human artists. It makes pictures, and that’s all that people take it to be.