I do agree that an AI that is underdeveloped in terms of its goals, yet allowed to exist, is all too likely to become an ethical and/or existential catastrophe, but I have a few questions.
If neurosurgery and psychology develop sufficiently, would it be ethically acceptable to align humans (or newborns) to other, more primitive life forms to the extent that we want to align AI to humanity? (I didn’t say “in the same way”, because the human brain seems to be organized differently from programmable computers; I mean practically the same change in behaviour and/or goals.)
Does anyone who mentions that AI would become more intelligent than the whole of human civilization also think that AI would therefore be more valuable than humanity? Shouldn’t AI’s goals be set with that in mind? And if not, isn’t the answer to 1) “yes”?
Your link is broken.
Well, cultural relativity is a fact: there is no objective morality, and people either justify any of their actions via tradition or simply follow it when they don’t want to think. Universal life rights would be great (no weaker than human rights, at least). I’m part legalist and part ecocentrist: I want sentience to persist in order to save the biosphere from the geological and astronomical events that will arrive sooner than a new Homo sapiens could evolve, should the current one go extinct before creating AGI. Everything else, I upvote.