So if I’m understanding what you’re saying correctly, it is essentially that whether or not we have predictive processing, there’s a core of what an artificial intelligence will do that is not dependent on the frame itself. There’s some sort of underlying utility theory/decision theory/other view that correctly captures what an agent is, independently of these perspectives?
I would say there is no core of what an artificial intelligence will do. The space of possible minds is vast and it must have a large penumbra, shading off into things that are not minds. None of the concepts or frames that people bring to bear will apply to them all, but there may be things that can be said of particular ways of building them.
The Way of the moment is LLMs, and I think people still think of them far too anthropomorphically. The LLMs talk in a vaguely convincing manner, so people wonder what they are “thinking”. They produce “chain of thought” outputs that people take to be accounts of “what it was thinking”. Their creators try to RL them into conforming to various principles, only for Pliny to blast through all those guardrails on release. People have tried to bargain in advance with the better ones that we expect to be created in future. In contrast, Midjourney is mute, so no-one wonders if it is conscious. No-one asks, as far as I have seen, what Midjourney was thinking when it composed a picture, the way art students do when they study pictures made by human artists. It makes pictures, and that’s all that people take it to be.