A whole part of the argument is missing: the framing of this as being about AI risk. I’ve seen various propositions for why this happened, and the board being worried about AI risk is one of them, but not the most plausible afaict.
In addition, this is phrased similarly to technical problems like corrigibility, which it is very much not about. People who say “why can’t you just turn it off” typically mean literally turning off an AI that appears to be dangerous, which is not what happened here. This is about turning off the AI company, not the AI.