Oh, sorry—I didn’t mean to imply otherwise. It’s GREAT if most people act in consistent, stable, legible ways, and one of the easy paths to encourage that is to pretend there’s some truth behind the common hallucinations. This goes for morals, money, personal rights, and probably many other things. I LIKE many parts of the current equilibrium, and don’t intend to tear it down. But I recognize internally (and in theoretical discussions with folks interested in decision theory and such) that there’s no truth to be had, only traditions and heuristics.
This means there is no way to answer the original question “What would make an AI a valid moral patient?” Fundamentally, it would take broad societal acceptance, which probably comes from many common interactions with many people.