What would happen if an FAI tried to AI-box an Omega-level AI? My guess is that Omega could escape by exploiting information unknown (and perhaps unknowable) to the FAI. This makes even Solomonoff Induction potentially dangerous: the procedure involves running arbitrary candidate programs, so there is a non-zero probability that the FAI ends up running a program that can unbox itself (assuming the FAI reasons probabilistically and doesn't just trust PA/ZF to be consistent), and the risk would be huge.
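To make the worry concrete, here is a minimal, purely illustrative sketch (in Python, not anyone's actual design) of an *approximate* Solomonoff inductor. Everything in it is invented for illustration: the bitstring "programs", the toy echo "interpreter" `run_in_box`, and the length cutoff. The point is only the line marked `RUN`: the inductor has to execute every enumerated program, so whatever sandbox wraps that call is exactly the "box" in question.

```python
import itertools

def run_in_box(program_bits):
    """Hypothetical sandboxed interpreter for a toy program encoding.

    Here a 'program' is just a tuple of bits and the 'interpreter' echoes it
    back, so the example stays runnable. In a real approximate Solomonoff
    inductor this is where an arbitrary, possibly adversarial, program would
    actually be executed.
    """
    return program_bits

def enumerate_programs(max_length):
    """All bitstrings up to max_length, shortest first (the 2^-length prior)."""
    for length in range(1, max_length + 1):
        for bits in itertools.product((0, 1), repeat=length):
            yield bits

def posterior_over_next_bit(observed, max_length=12):
    """Weight programs whose output extends `observed`; predict the next bit."""
    weights = {0: 0.0, 1: 0.0}
    for program in enumerate_programs(max_length):
        output = run_in_box(program)  # RUN: executing an untrusted program
        if output[: len(observed)] == tuple(observed) and len(output) > len(observed):
            weights[output[len(observed)]] += 2.0 ** (-len(program))
    total = sum(weights.values())
    return {bit: (w / total if total else 0.5) for bit, w in weights.items()}

if __name__ == "__main__":
    print(posterior_over_next_bit((1, 0, 1)))
```

Under this (toy) framing, the comment's claim is that no sandbox around the `RUN` step can be trusted against a sufficiently capable program, and the inductor cannot rule such programs out in advance.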