My real concern would be the lack of understanding of what precautions are required, and how they are actually implemented.
If a corporation decided to enter the race for a true AI, it wouldn’t be surprising if they had the AI researchers work with zero safeguards while reassuring them that a separate team was managing the risk. That external team might well have a fantastic emergency protocol with all sorts of remote kill switches at power outlets and so on, but if a true AI were actually developed, there is no guarantee it could be controlled by such external measures.
I just don’t believe that a corporation undertaking a project like this would understand the risk involved; more likely, they would mistakenly believe they had it completely under control.