I don’t necessarily disagree with you, but that’s not my read of what the Pro-Human Declaration is saying. “No AI Personhood” is in the “Human Agency and Liberty” section, next to stuff like “AI should not be allowed to exploit data about the mental or emotional states of users” and “AI systems should be designed to empower, rather than enfeeble their users”. In context, I would not consider their position on AI personhood to be rooted in x-risk concerns. The first two points of the declaration are “Human Control Is Non-Negotiable” and “Meaningful Human Control”. Fulfilling those points would effectively require the AI systems be aligned, but I see no statement or implication that, if the AI systems were aligned and were moral patients, the writers and signatories of this declaration would change their position. I could be wrong! This is very much a big tent thing. But it does worry me that this line made it into the declaration.