We’ve been struggling with natural consciousnesses, both human and animal, for a long long time, and it’s not obvious to me that artificial consciousness can avoid any of that pain.
You’re right, but there are a few important differences:
There is widespread agreement on the status of many animals. People believe most tetrapods are conscious. The terrible stuff we do to them is done in spite of this.
We have a special opportunity at the start of our interactions with AI systems to decide how we’re going to relate to them. It is better to get things right off the bat than to try to catch up (and shift public opinion) decades later.
We have a lot more potential control over artificial systems than we do over natural creatures. It is possible that very simple, low-cost changes could make a huge difference to their welfare (or to whether they have any).
I’m not particularly worried that we may harm AIs that do not have valenced states, at least in the near term. The issue is more about precedent and expectations going forward. I would worry about a future in which we create and destroy conscious systems willy-nilly, because of how that might affect our understanding of our relationship to them, and ultimately how we act toward AIs that do have morally relevant states. These worries are nebulous, and I very well might be wrong to be so concerned, but it feels risky to rush into things.