The reason given is paranoia. If you are concerned that a runaway machine intelligence might accidentally obliterate all sentient life, then a machine that can shut itself down has a valuable safety feature.
In practice, I don’t think we will have to build machines that regularly shut down. Nobody regularly shuts down Google. The point is that, if we seriously think there is good reason to be paranoid about this scenario, there is a defense that is far easier to implement than building a machine intelligence that has assimilated all human values.
I think this dramatically reduces the probability of the “runaway machine accidentally kills all humans” scenario.