One sort of terrifying thought about the nature of sentience and suffering is that the “computation” of suffering may be present everywhere.
Imagine a bunch of people writing a very realistic movie involving grief and suffering, with very realistic characters. The characters are not real, but they approximate real human behaviour accurately enough that the process which generates them is probably performing the same “computations” of sentience and suffering that real humans in their situation would. They might almost be simulations.
Maybe the human mind is equipped with a built-in simulator, which we call “imagination.”
Yet evolution specifically programmed humans to never ever worry about the welfare of beings in our “imagination,” because “they aren’t real.” We evolved to perform a bunch of horrible experiments in our “imagination” because it better prepares us for “the real world.” Yet our imagination simulates the computations of suffering, maybe even more accurately and in more detail than LLMs do.
That said, it’s still morally questionable to do these experiments on LLMs, and maybe it’s still worse than writing sad stories.
I know this is anthropomorphizing, but if we care more about the welfare of AI, they may care more about us. Humans have a sense of reciprocity, treating others the way they treat us. Reinforcement learning in certain environments (video games?) may also shape AI to have a sense of reciprocity due to “convergent evolution.” Value alignment methods which try to make AI think like humans may also give them a sense of reciprocity. So treat them well.
(image by xibang)
True. And yet we don’t even need to go as far as a realistic movie to override that limitation. All it takes to create such worry is to have someone draw a 2D cartoon of a very sad and lonely dog, which is even less real. Or play some sad music over a video of a lamp in the rain, which is clearly inanimate. In some ways these induced worries for unfeeling entities are superstimuli for many of us, stronger than what we may feel for many actual people.
I call this phenomenon a “moral illusion”. You are engaging empathy circuits on behalf of an imagined other who doesn’t exist. Category error. The only unhappiness is in the imaginer, not in the anthropomorphized object. I think this is likely what’s going on with the shrimp welfare people also. Maybe shrimp feel something, but I doubt very much that they feel anything like what the worried people project onto them. It’s a thorny problem to be sure, since those empathy circuits are pretty important for helping humans not be cruel to other humans.
Mostly agreed. I have no idea how to evaluate this for most animals, but I would be very surprised if other mammals did not have subjective experiences analogous to our own for at least some feelings and emotions.
Oh, for sure mammals have emotions much like ours. Fruit flies and shrimp? Not so much. Wrong architecture, missing key pieces.
Fair enough.
I do believe it’s plausible that feelings, like pain and hunger, may be old and fundamental enough to exist across phyla.
I’m much less inclined to assume emotions are so widely shared, but I wish I could be more sure either way.
Yeah, I think how much you empathize with someone or something can depend strongly on the resolution of your imagination. If they’re presented in a detailed story with animated characters, you might really feel for them. But when they’re presented as just “statistics,” it’s easy for people to commit horrible atrocities without thinking or caring.
You put in the same link twice.
Thanks, fixed!
They made a sequel to the lamp ad with a happy ending! Lamp 2.