One, do you think a “homomorphically encrypted version of their best AI” is a viable thing?
Yes. See the reference. Even a 10 or 100x computation cost increase would be acceptable for top-level national security purposes like this.
Imagine that during the Cold War all the US and the USSR could know was whether one side’s nuclear arsenal was better than the other side’s.
That sounds very stabilizing to me. ‘We must prevent a missile gap!’
Which reference? I’m not talking about the millionaires’ problem, I’m talking about executing homomorphic code.
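For what it’s worth, “executing homomorphic code” at the smallest scale looks like this — a toy Paillier sketch (my own illustration, with tiny hard-coded primes and Python 3.9+; real FHE for something like a neural net would use lattice-based schemes, this only does additions). The point is that multiplying two ciphertexts adds the hidden plaintexts, so an evaluator computes on data it cannot read:

```python
import math
import random

# Toy Paillier cryptosystem (illustrative only: tiny primes, no padding).
# Paillier is additively homomorphic: multiplying ciphertexts adds plaintexts.
p, q = 17, 19
n = p * q                       # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)    # private key
g = n + 1                       # standard choice of generator

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # private decryption constant

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:       # r must be coprime to n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = encrypt(5), encrypt(7)
# The evaluator multiplies ciphertexts without ever seeing 5 or 7...
c = (a * b) % n2
# ...and only the key holder can decrypt the sum.
print(decrypt(c))  # 12
```

Fully homomorphic schemes extend this from additions to arbitrary circuits, which is where the 10–100x-and-up overhead comes from — and it’s also what separates this from the millionaires’ problem, which is only a secure comparison protocol, not general computation on encrypted data.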
‘We must prevent a missile gap!’
One side thinks this and so accelerates the arms race. The other side thinks “This is our chance! We must strike while we know we’re ahead!” :-/