I think you might be missing the point here. You’re acting like I’m claiming these frameworks reveal the “true nature” of ant colonies, but that’s not what I’m saying.
The question I’m trying to answer is why these different analytical tools evolved in the first place. Economists didn’t randomly decide to call pheromone trails “price signals”—they did it because their mathematical machinery actually works for predicting decentralized coordination. Same with biologists talking about superorganisms, or cognitive scientists seeing information processing.
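To make the analogy concrete, here is a toy sketch (all parameters invented) of why the economists’ machinery transfers: trail strength is a public signal that each ant reads locally, like a price, and the colony converges on the cheaper option with no central planner. This is a mean-field caricature, not a real ant-colony model:

```python
def simulate_trails(lengths, steps=100, evaporation=0.1):
    """Mean-field pheromone dynamics. Trail strength is a shared signal
    (like a price) read locally by each ant; no central planner exists."""
    pheromone = {path: 1.0 for path in lengths}
    for _ in range(steps):
        total = sum(pheromone.values())
        for path in pheromone:
            share = pheromone[path] / total    # fraction of ants on this path
            deposit = share / lengths[path]    # shorter trip -> more deposits per unit time
            pheromone[path] = (pheromone[path] + deposit) * (1 - evaporation)
    return pheromone

# Hypothetical two-path choice: the short path ends up with almost all the pheromone.
trails = simulate_trails({"short": 1.0, "long": 3.0})
```

The deposit/evaporation pair plays the role that demand and discounting play in the price-signal story, which is exactly why the same mathematics predicts both systems.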
I’ll try to do an inverse Turing test here and see if I can make it work. So if I’m understanding you correctly, it’s essentially that, whether or not we use the predictive-processing frame, there’s a core of what an artificial intelligence will do that is not dependent on the frame itself. There’s some sort of underlying utility theory/decision theory/other view that correctly captures what an agent is, apart from these perspectives?
I think that the dot pattern is misleading, as it doesn’t actually give you any predictive power when looked at from one viewpoint or another. I would agree with you that if the composition of these intentional stances leads to no new way of viewing it, then we might as well not take this approach, as it won’t affect how good we are at modelling agents. I guess I’m just not convinced that these ways of looking at it are useless; it feels like a bet against all of these scientific disciplines that have existed for some time.
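For concreteness, the frame-independent “underlying utility theory” view gestured at above usually means something like an expected-utility maximizer. A minimal sketch, with all action names, probabilities, and utilities invented for illustration:

```python
def best_action(actions, outcomes, utility):
    """Expected-utility maximization: on this view, whatever frame we describe
    the agent in, its behaviour is summarized by the action with the highest
    probability-weighted utility."""
    def expected_utility(action):
        return sum(prob * utility(outcome)
                   for outcome, prob in outcomes[action].items())
    return max(actions, key=expected_utility)

# Toy decision problem (hypothetical numbers):
outcomes = {
    "explore": {"find_food": 0.3, "nothing": 0.7},  # EU = 0.3
    "exploit": {"find_food": 0.8, "nothing": 0.2},  # EU = 0.8
}
utility = {"find_food": 1.0, "nothing": 0.0}.get
choice = best_action(["explore", "exploit"], outcomes, utility)
```

The claim under discussion is whether some description like this captures the agent regardless of which intentional stance we adopt, or whether it is just one more frame among the others.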
So if I’m understanding you correctly, it’s essentially that, whether or not we use the predictive-processing frame, there’s a core of what an artificial intelligence will do that is not dependent on the frame itself. There’s some sort of underlying utility theory/decision theory/other view that correctly captures what an agent is, apart from these perspectives?
I would say there is no core of what an artificial intelligence will do. The space of possible minds is vast and it must have a large penumbra, shading off into things that are not minds. None of the concepts or frames that people bring to bear will apply to them all, but there may be things that can be said of particular ways of building them.
The Way of the moment is LLMs, and I think people still think of them far too anthropomorphically. The LLMs talk in a vaguely convincing manner, so people wonder what they are “thinking”. They produce “chain of thought” outputs that people take to be accounts of “what it was thinking”. Their creators try to RL them into conforming to various principles, only for Pliny to blast through all those guardrails on release. People have tried to bargain in advance with the better ones that we expect to be created in future. In contrast, Midjourney is mute, so no-one wonders if it is conscious. No-one asks, as far as I have seen, what Midjourney was thinking when it composed a picture, the way art students do when they study pictures made by human artists. It makes pictures, and that’s all that people take it to be.