Should We Have Faith That AIs Will Become Benevolent Gods?

There are many problems with our current world that are larger than anything smart policies or public awareness campaigns could solve on their own:

  • The Loneliness Epidemic: contrary to popular online narratives, both sexes are affected by this. It is the result of a combination of factors, such as intensive schooling, social media (especially scroll-only platforms such as TikTok), the lack of real-world interaction, and the monetary incentives behind narratives that pit men and women against each other.

  • Schooling, whose problems are especially noticeable in East Asian countries such as China, Japan, and Korea. The school system pits students against each other in a competition to score as high as possible on rigid tests that favor students who are good at memorization far more than students with outside-the-box thinking and problem-solving skills.

  • The corrosion of Western-style democracy: culture wars and populism fuel a deep divide between ideologies, and Western electorates are more and more willing to vote for candidates who run on populist platforms rather than candidates who are factually competent at policymaking.

It is therefore understandable why the current leaders in the AI/ML space are forecasting, or wishing for, AI-development endgames in which the AIs we develop help solve our problems, deus ex machina style.

For example, Dario Amodei, the CEO of Anthropic, laid out his vision in the essay Machines of Loving Grace, where the endgame is basically “AIs go off and do their own thing, while humans get to go back to what our brain’s biology is best suited for: mother nature”. AI 2027, a more forecast-oriented report, has as one of its two projected outcomes of AI development “an aligned AI takes over and makes a deal with any misaligned AIs, while humans are inevitably left out of the loop”.

Personally, I am hopeful for such outcomes, since the problems I have listed seem like yet another flavor of the generational (or even multi-generational) growing pains that the human species has endured many times in the past. Yes, we could wait them out and solve them as a community, but that would take time, and if we chose that option, more people would suffer than most of us would ever like to admit.

This is a topic that I see the LessWrong community engaging with too little, even though it is not only pivotal to our entire understanding of AI development, but also deeply rooted in the AI/ML community’s collective vision for our future.
