Human Nature, ASI Alignment, and Extinction

I was reflecting on human nature and how our very biological programming might drive us to extinction. From my limited knowledge of anthropology and human evolution, I feel that most of us are wired to look out for ourselves and our closest relatives, but not for humanity as a species. So when it comes to developing potentially dangerous technologies such as ASI (Artificial Super Intelligence), I simply cannot see the people building it slowing down or taking the safety measures required to avoid a misaligned superintelligence. I don’t see that happening because of how human nature (supposedly) works. People like Sam Altman, Elon Musk, Donald Trump, and other relevant actors in ASI development are (supposedly) more concerned with their own futures than with humanity’s as a whole. That seems pretty stupid, but as I see it, they, as humans, simply don’t have the instinct to put humankind’s well-being before their own.

So, are we doomed? Even if most humans have that tendency, I also think there are certain individuals who are willing to sacrifice power, status, and money to protect humanity. Examples include Geoffrey Hinton, Ilya Sutskever, and others who have stepped back from frontier ASI development to focus on AI safety. I think it all depends on how many of these more humankind-conscious actors step onto the main stage, and whether that is enough to keep us on a path that does not end in catastrophe.