In addition to what V_V says below, there should be no official circumstance under which the AI is released from the box: that iteration of the AI would be used solely for experimentation, and only the next version, substantially revised based on the results of those experiments and on independent experiments, would be a candidate for release.
Again, this is not perfect, but it buys more time for better safety methods or architectures to catch up while still extracting some benefit from a potentially unsafe AI.
Taking source code from a boxed AI and using it elsewhere is equivalent to partially letting it out of the box—especially if how the AI works is not particularly well understood.
Right; you certainly wouldn’t do that.
Backing it up on tape storage is reasonable, but you would never run it outside maximum-security facilities.