In a sense, the AI “in the box” is not really boxed
I meant the “AI Box” scenario where it is printing results to a screen in the outside world. I do think BoMAI is truly boxed.
We cannot “prove” that something is physically impossible, only that it is impossible under some model of physics.
Right, that’s more or less what I mean to do. We can assign probabilities to statements like: “it is physically impossible (under the true laws of physics) for a human or a computer in isolation, with an energy budget of x joules and a power budget of y joules/second, to transmit information in any way other than via a), b), or c) from above.” This seems extremely likely to me for reasonable values of x and y, so a “proof” is still useful even when it must be predicated on such a physical assumption.
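One way to make the predication precise (a sketch, with A standing for the physical assumption above and S for “no information escapes the box”; these symbols are my labels, not from the original argument):

```latex
% A = the physical assumption (no channel besides a), b), c))
% S = "no information escapes the box"
% Suppose the safety argument establishes \Pr(\neg S \mid A) = 0.
% Then, by the law of total probability:
\Pr(\neg S) = \Pr(\neg S \mid A)\Pr(A) + \Pr(\neg S \mid \neg A)\Pr(\neg A)
            \le 0 \cdot \Pr(A) + 1 \cdot \Pr(\neg A)
            = \Pr(\neg A)
```

That is, the probability of the box leaking is bounded by the probability that the physical assumption is false, which is exactly why a conditional “proof” still carries weight when the assumption is extremely likely.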