You have correctly identified that giving a corrigible superintelligence to most people will result in doom. This is why I think it’s vital that power over superintelligence be kept in the hands of a benevolent governing body. And yes, since this is probably an impossible ask, I think we should basically shut down AI development until we figure out how to select for benevolence and wisdom.
Still, I think corrigibility is a better strategy than the approaches currently being taken by frontier labs (which are even more doomed).