The issue of rigidity is a broad and important topic which has been insufficiently addressed on this site. A ‘rigid’ AI cannot be considered rational, because all rational beings are aware that their reasoning processes are prone to error. I would go further and say that a rigid FAI can be just as dangerous (in the long term) as a paperclip maximizer. However, actually implementing a ‘flexible’ AI would indeed be difficult. Such an AI would be a true inductive agent: even its confidence in the solidity of mathematical proof would rest on empirical evidence. It would therefore be hard to predict how such an AI might function; there is a risk that it would ‘go insane’ as it loses confidence in the validity of the core assumptions underlying its own cognitive processes. But this is already taking us far afield of the original subject of discussion.
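To make the inductive-agent point concrete, here is a minimal toy sketch (entirely my own illustration; the function name and the observation sequence are invented): a Bayesian agent that treats the reliability of its own proof checker as an ordinary empirical quantity, keeping a Beta posterior over it rather than assigning mathematical proof a probability of 1.

```python
def update_reliability(alpha: float, beta: float, verified: bool) -> tuple[float, float]:
    """Beta-Bernoulli update on one observed success/failure of the proof checker."""
    return (alpha + 1, beta) if verified else (alpha, beta + 1)

alpha, beta = 1.0, 1.0  # uniform prior: no dogmatic trust in the checker
for outcome in [True, True, True, False, True]:  # hypothetical verification results
    alpha, beta = update_reliability(alpha, beta, outcome)

confidence = alpha / (alpha + beta)  # posterior mean reliability
print(f"P(the checker is right about the next proof) ~= {confidence:.2f}")
```

On this toy model, the ‘going insane’ failure mode would correspond to a run of apparent checker failures driving the posterior low enough that the agent stops trusting the very machinery it uses to weigh the evidence in the first place.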
+1, great explanation.