“Why can’t you just turn it off?”

If you’re so worried about AI risk, why don’t you just turn off the AI when you think it’s about to do something dangerous?

On Friday, members of the OpenAI board, including Ilya Sutskever, decided that they wanted to “turn off” OpenAI’s rapid push towards smarter-than-human AI by firing CEO Sam Altman.

The result seems to be that the AI won. The board has backed down after Altman rallied staff into a mass exodus. There’s an implied promise of riches from the AI to those who develop it more quickly, and people care a lot about money and not much about small changes in x-risk. Of course this is a single example, but it is part of a pattern of people wanting to reap localized rewards from AI: the UK recently said it will refrain from regulating AI ‘in the short term’, and EU countries have started lobbying to have foundation models excluded from regulation.

That is why you cannot just turn it off. People won’t want to turn it off[1].



  1. ↩︎

    There is a potential counterargument that once it becomes clear that AI is very dangerous, people will want to switch it off. But there is a conflicting constraint: it must also still be possible to switch it off at that time. At early times, people may not take the threat seriously, and at late times they may take it seriously but be unable to switch it off because the AI is too powerful.