Well, one of the reasons the Turing Test has lasted so long as a benchmark, despite its problems, is its central genius: holding inorganic machines to the same standards as organic ones. Notwithstanding p-zombies and some of the weirder anime shows, we're actionably and emotionally confident in the consciousness of the humans who surround us every day. We can't experience those consciousnesses directly, but we do care about their states in terms of both instrumental and object-level utility.
An AGI presents new challenges, but we've already demonstrated a basic willingness to treat ambulatory meat sacks as valuable beings with an internal perspective. By assigning the same sort of 'conscious' label to a synthetic being who nonetheless has a similar set of experiential consequences in our lives, we can somewhat comfortably map our previous assumptions onto a new domain. That gives us a beachhead, and a basis for cautious expansion and observation in the much more malleable space of inorganic intelligences.
This greatly clarified the distinction for me. Well done.