Their training certainly does produce systems that are really good at seeming like they have inner lives; that's part of the point. But then, humans well-trained to write fiction are also really good at producing text from the viewpoint of characters who seem to have inner lives. Another analogy might be immersive roleplayers, who often do experience the emotions their character would be feeling, though generally not more immediate sensations like physical pain.
Are AIs more like authors, more like immersive roleplayers, more like living beings, or something different from any of these?
In the author case there is pretty clearly no moral weight carried by the text, except as a side-effect of whatever real-world impact the text has. It may be distasteful, but nearly everyone agrees that fictional characters are not moral patients, and that an author writing a torture scene is not literally experiencing torture, even though a whole bunch of living neurons are dedicated to simulating it.
I suspect that AI experience (if it exists) sits much closer to the author end of such a scale, perhaps as far along as the immersive roleplayer. Their “mood” seems to be driven far more by the most recent prompt than the mood of any human genuinely experiencing things would be. That is likely an artifact of current training and architecture details rather than of any fundamental principle, though.