Don’t Fear the Reaper: Refuting Bostrom’s Superintelligence Argument

I’ve put a preprint up on arXiv that this community might find relevant. The argument is over a year old, so it may be dated, and I haven’t been keeping up with the field much since I wrote it. I welcome any feedback, especially on where the crux of the AI risk debate has moved since the publication of Bostrom’s Superintelligence book.

In recent years, prominent intellectuals have raised ethical concerns about the consequences of artificial intelligence. One concern is that an autonomous agent might modify itself to become “superintelligent” and, in supremely effective pursuit of poorly specified goals, destroy all of humanity. This paper considers and rejects the possibility of this outcome. We argue that this scenario depends on an agent’s capacity to rapidly improve its ability to predict its environment through self-modification. Using a Bayesian model of a reasoning agent, we show that there are important limitations to how an agent may improve its predictive ability through self-modification alone. We conclude that concern about this artificial intelligence outcome is misplaced and better directed at policy questions around data access and storage.

As I hope is clear from the argument, the point of the article is to suggest that, to the extent AI risk is a problem, we should shift our focus away from AI theory and towards questions of how we socially organize data collection and retention.
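
For readers who want a concrete, if much simplified, picture of the kind of limitation at issue, here is a toy Beta-Bernoulli sketch. It is not the model from the paper; the setup, names, and parameters below are illustrative assumptions only. “Self-modification” is stood in for by spending more effort on approximate inference over a fixed dataset, which can at best recover the exact Bayesian posterior predictive, while adding data shifts the attainable bound itself.

```python
# Toy sketch only: a Beta-Bernoulli "agent" predicting coin flips.
# "Self-modification" is modeled (by assumption) as spending more effort on
# approximate inference over the SAME data; it can at best recover the exact
# Bayesian posterior predictive, whereas more data improves the bound itself.
import numpy as np

rng = np.random.default_rng(0)
TRUE_THETA = 0.7                              # unknown environment parameter
test = rng.random(100_000) < TRUE_THETA       # held-out outcomes to predict

def log_loss(p_heads, outcomes):
    """Mean negative log-likelihood of binary outcomes under P(heads) = p_heads."""
    p = np.where(outcomes, p_heads, 1.0 - p_heads)
    return -np.mean(np.log(p))

def exact_predictive(data, a=1.0, b=1.0):
    """Exact posterior predictive P(heads) under a Beta(a, b) prior."""
    heads = int(data.sum())
    return (a + heads) / (a + b + len(data))

def approximate_predictive(data, n_samples, a=1.0, b=1.0):
    """Monte Carlo approximation of the same predictive; more samples stand in
    for a 'smarter', self-improved inference procedure."""
    heads = int(data.sum())
    thetas = rng.beta(a + heads, b + (len(data) - heads), size=n_samples)
    return float(thetas.mean())

small = rng.random(20) < TRUE_THETA           # fixed, limited observations
large = rng.random(20_000) < TRUE_THETA       # what broader data access buys

# Exact figures vary with the seed, but the pattern is the point.
print("fixed data, crude inference :", log_loss(approximate_predictive(small, 10), test))
print("fixed data, heavy inference :", log_loss(approximate_predictive(small, 10**6), test))
print("fixed data, exact Bayes     :", log_loss(exact_predictive(small), test))
print("more data,  exact Bayes     :", log_loss(exact_predictive(large), test))
```

The numbers will wobble from run to run, but the intended illustration is this: on fixed data, heavier inference converges to the exact-Bayes figure rather than surpassing it, and only the last line, which changes the data rather than the reasoner, meaningfully improves prediction. That is the intuition behind directing attention at data access and storage rather than AI theory.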