Why not just write failsafe rules into the superintelligent machine?

Many people think you can solve the Friendly AI problem just by writing certain failsafe rules into the superintelligent machine's programming, like Asimov's Three Laws of Robotics. I thought the rebuttal to this was in "Basic AI Drives" or one of Yudkowsky's major articles, but after skimming them, I haven't found it. Where are the arguments concerning this suggestion?