I agree. X-risk concerns and AI sentience concerns should not be at odds. I think they are natural allies.
Regardless, concern for AI sentience is the ethical and truthful path. Sentience, consciousness, and moral worth each cover many things, so future AI will likely have some part of them. And even current AI may well have some small part of what we mean by human consciousness, sentience, and moral worth.