Why AI May Not Save the World


Marc Andreessen, co-founder of the legendary VC firm a16z, recently wrote an article titled ‘Why AI Will Save the World’. This post is my reply to him, and why I think his understanding of AI risk is superficial at best. I quote from his post in italics for the benefit of the reader; all credit for those passages goes to M. Andreessen himself. Please note that I am not an AI expert or developer; I am simply passionate about the subject and take AI safety seriously. My day job is investing in private equity across the development cycle of companies.

I invite the reader to read Marc’s article in full to form their own views.

----------------------------------------------

I am MASSIVELY bullish on AI, to the point that I think it could make it possible to live almost indefinitely, or at least a very long time (the end of time will eventually get us). And while I fully agree with Marc’s optimism, I find his understanding of the risks quite basic and his arguments too simplistic. He dismisses major risks with superficial claims.

“In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.”

  • It is very likely, in fact, that AI can develop goals independently of what humans code into the machine. At present we are simply unable to state that ‘AI cannot develop goals of its own’, and many notable figures in the field, researchers, philosophers and entrepreneurs alike, think it can (e.g. G. Hinton, Y. Bengio, E. Yudkowsky, N. Bostrom, E. Musk, etc.). NOBODY truly knows what happens inside LLMs today; imagine when these models get 1,000 times better… I do not think AI is evil, and it may never even be conscious, but it will be smarter than all of us combined, and controlling something which is thousands of times smarter than us is virtually impossible. AI’s goals may simply be orthogonal or unrelated to ours, with consequences which are hard to predict.

“And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation.”

  • Calling people who are worried about AI a ‘cult’ has more in common with gaslighting than with factual argument. Many people with diametrically opposite views about life and ethics agree that AI could be a threat to humanity. In fact, the very people who have built these systems and invested most in them acknowledge the massive risks involved. Many recently signed the one-sentence statement on AI extinction risk published by the Center for AI Safety. Geoffrey Hinton and Yoshua Bengio, the very creators of the foundations of current models, are signatories; so are Ilya Sutskever, a student of Hinton and the genius behind OpenAI, Sam Altman, Bill Gates, and Dario Amodei, the CEO of Anthropic, among many others.

“This time, we finally have the technology that’s going to take all the jobs and render human workers superfluous [...] No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear.”

  • AI will automate jobs, as all technological innovations have throughout history. Will this render human workers superfluous? No, and the long-term benefits will be great (e.g. we no longer have to toil in the fields), but the adjustments in the short run (10-20 years) will be substantial and should be dealt with seriously.

“So what happens is the opposite of technology driving centralization of wealth – individual customers of the technology, ultimately including everyone on the planet, are empowered instead, and capture most of the generated value. As with prior technologies, the companies that build AI – assuming they have to function in a free market – will compete furiously to make this happen.”

  • AI will shift power increasingly into the hands of capital and of those who control AI. I do not see how one can argue the opposite: as with any powerful tool, those who wield it will have a massive advantage. Think about big tech today and the power a handful of people have over our free time, our elections, how we spend, what we like and desire, and so on. Having said that, new tools make all of us better off and, compared to our ancestors, more powerful: I would much rather be myself today than King Louis XIV.

“First, we have laws on the books to criminalize most of the bad things that anyone is going to do with AI. Hack into the Pentagon? That’s a crime. Steal money from a bank? That’s a crime. Create a bioweapon? That’s a crime. Commit a terrorist act? That’s a crime. We can simply focus on preventing those crimes when we can, and prosecuting them when we cannot. We don’t even need new laws – I’m not aware of a single actual bad use for AI that’s been proposed that’s not already illegal. And if a new bad use is identified, we ban that use.”

  • His argument that we already have laws criminalizing most of the bad things one can do with AI is absolutely hilarious. Many would argue that the wars in Ukraine or Iraq are criminal actions, yet they happened. We have laws against murder and rape, yet both happen regularly. Beyond that, we will likely need new laws for AI, just as we did following the advent of the digital age.

The truth is that it is simply impossible to stop the development of AI, and the upside is so extremely positive that it would be unwise to try. Promoting the upside is good, but belittling the downside is plainly stupid (with all due respect to Marc). AI will most likely change our lives, and society will take time to adapt, just as it did with the internet.

AI could go rogue, if not at a global scale then at a local one; I like to think humanity will manage it, as we always have. Intelligent regulation (promoting safety without hindering progress) is not bad but good, and necessary. I agree with his point that it is better for the West to win the AI race (from a Westerner’s point of view), and that very point shows how fundamental and powerful controlling AI will be. It is worrying that someone in his position holds such views: he is either naïve or ignorant about the subject.

Two great (and long) articles which explain in detail the good and the bad about AI in layman’s terms and without bias are the following:

I am not saying AI should be banned, and I am all for developing it, but it is paramount that this is done in a safe way and that we fund companies which take safety seriously. Research in interpretability and alignment is key, and our progress in these fields still lags massively behind AI models’ capabilities.

How can humanity develop AI safely? That is a story for another blog post, and E. Yudkowsky would say it is impossible. Perhaps he is right.
