Thank you.
The paper that most closely addresses my questions is this one: http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf which is linked from the Yampolskiy paper you shared.
It didn’t convince me that boxing is as unlikely to work as you suggest. What it mainly did was make me doubt the assumption that the AI has to use persuasion at all to escape, which I previously thought was very likely.
I may have overstated my doubts about boxing. It could be an effective local and one-time solution, but not for millions of AIs over decades. However, the boxing of nuclear power plants and bombs has been rather effective at preventing large-scale catastrophes for around 70 years. (In the case of Chernobyl, the distance from large cities was a form of boxing.)