Amplitude amplification is a particular algorithm that is designed for a particular problem, namely searching a database for good entries. It doesn’t mean you can arbitrarily amplify quantum amplitudes.
I suppose I hadn’t heard before of the idea of boxing an AI using the (yet to be developed) techniques that are necessary to prevent quantum decoherence. Maybe there is some merit to this.
I think running an AI on the basis that we haven’t found an argument that it is unsafe sounds dangerous. Also, I still don’t see why an argument that an AI is unsafe would depend on a difficult mathematical problem.
How should I have worded the post to maximize merit?
Amplitude amplification is a particular algorithm
That’s Grover’s algorithm. Amplitude amplification is the generalization of the trick Grover’s algorithm uses, and applicable here.
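To make the distinction concrete, here is a rough numpy sketch; the dimension, the “good” index, and the 0.1 starting amplitude are all invented for the example. Grover’s algorithm is the special case where the state-preparation unitary A produces the uniform superposition; amplitude amplification runs the same reflect-and-rediffuse iteration for any A.

```python
import numpy as np

N, good = 8, 0  # toy dimension; index 0 is the "good" outcome

# A state A|0> with a deliberately small amplitude on the good
# outcome; A is a Householder reflection mapping |0> to that state,
# standing in for any algorithm that reaches the good outcome with
# small probability.
psi = np.full(N, np.sqrt(0.99 / (N - 1)))
psi[good] = 0.1
v = psi - np.eye(N)[:, 0]
A = np.eye(N) - 2 * np.outer(v, v) / (v @ v)

S_f = np.eye(N)
S_f[good, good] = -1     # flip the good amplitude (the "oracle")
S_0 = np.eye(N)
S_0[0, 0] = -1           # flip the |0> component
Q = A @ S_0 @ A.T @ S_f  # one amplification step, up to a global phase

state = A @ np.eye(N)[:, 0]          # prepare A|0>
theta = np.arcsin(abs(state[good]))  # initial "good" angle
for _ in range(int(np.round(np.pi / (4 * theta) - 0.5))):
    state = Q @ state

print(abs(state[good]) ** 2)  # ~0.995, up from 0.01
```

With A set to the uniform-superposition transform, Q reduces to the usual Grover iteration.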
I think running an AI on the basis that we haven’t found an argument that it is unsafe sounds dangerous.
If, as you say, we can’t expect to get safety guarantees, then safety assumptions are the best we can do. Or do you mean that we should wait for safety guarantees, but you don’t expect such a guarantee to require an AI to prove anything?
How should I have worded the post to maximize merit?
If your core idea is to use decoherence prevention techniques as an AI boxing method, it would have helped to specifically mention such techniques rather than waiting until the comments to discuss them.
That’s Grover’s algorithm. Amplitude amplification is the generalization of the trick Grover’s algorithm uses, and applicable here.
You would need a much more rigorous argument to show that amplitude amplification is relevant here. (Yes, I did read your other post that you say is more detailed.) On the face of it, what you are saying appears to directly contradict the linearity of quantum mechanics.
If, as you say, we can’t expect to get safety guarantees, then safety assumptions are the best we can do. Or do you mean that we should wait for safety guarantees, but you don’t expect such a guarantee to require an AI to prove anything?
My point is that I am not aware of any connection between uncertainty about whether an AI is safe and uncertainty about whether specific well-defined mathematical statements are true. You suggest that we can use your technique to reduce the latter uncertainty, thereby reducing the former uncertainty. But if the two uncertainties are not correlated, then reducing the latter does not reduce the former.
Here’s a safety assumption that might come up: “This prior over possible laws of physics is no more confused by these sensory inputs than the Solomonoff prior.” Why would we want this? Even if we just want to maximize diamonds, we can’t identify the representation of diamonds in the Solomonoff prior. If we use a prior over all possible atomic physics models, we can say that the amount of diamond is the number of carbon atoms covalently bound to four other carbon atoms. If experiments later indicate quantum physics, the prior might desperately postulate a giant atomic computer that runs a simulation of a quantum physics universe. A diamond maximizer would then try to hack the simulation to rearrange the computer into diamonds. This query could tell us to keep looking for more general priors.
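As a hypothetical illustration of why the atomic prior gives us a handle that the Solomonoff prior doesn’t: against an atomic world model, the utility function is directly writable. The representation below (element tags plus bond lists) is made up for the example.

```python
def diamond_carbons(atoms):
    """Count carbon atoms covalently bound to four other carbon atoms."""
    return sum(
        1
        for atom in atoms
        if atom["element"] == "C"
        and len(atom["bonds"]) == 4
        and all(atoms[j]["element"] == "C" for j in atom["bonds"])
    )

# Two carbons bonded only to each other: no diamond yet.
print(diamond_carbons([
    {"element": "C", "bonds": [1]},
    {"element": "C", "bonds": [0]},
]))  # 0
```

No analogous function can be written against the Solomonoff prior, because its hypotheses are opaque programs with no labeled carbon atoms.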
Is there any quantum computing which does not rely on decoherence prevention techniques? Coherence is what makes quantum computers work in the first place.
What level of argument would you like? Linearity means that the transition functions that map between timeline amplitude distributions must be linear. That is indeed a necessary condition on the transition functions. It looks like you want to exclude all transition functions that are not permutations, i.e., that don’t preserve the distribution’s histogram. The functions I use for amplitude amplification here are one that flips the amplitude of the timeline that ran the AI, and Grover’s diffusion operator, which redistributes amplitudes to the flipped one’s benefit. If linearity were a sufficient condition, I wouldn’t need these: I could simply use the function that maps the amplitudes of all timelines that did not run the AI to 0, which is linear.
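To make this concrete, here is a minimal numpy sketch; the eight-timeline toy state space is invented for illustration. Both operators I named are linear and, crucially, unitary, while the zeroing shortcut is linear but not unitary.

```python
import numpy as np

N, ai = 8, 0                     # toy state space; timeline 0 ran the AI
state = np.ones(N) / np.sqrt(N)  # uniform superposition over timelines

# The two operators named above: the sign flip on the AI timeline,
# and Grover's diffusion operator (reflection about the mean).
flip = np.eye(N)
flip[ai, ai] = -1
s = np.ones(N) / np.sqrt(N)
diffusion = 2 * np.outer(s, s) - np.eye(N)

# Both are linear and unitary, so quantum mechanics permits them.
assert np.allclose(flip @ flip.T, np.eye(N))
assert np.allclose(diffusion @ diffusion.T, np.eye(N))

# The shortcut that zeroes every other timeline is linear too,
# but not unitary, so it is not an available transition function.
zero_others = np.zeros((N, N))
zero_others[ai, ai] = 1
assert not np.allclose(zero_others @ zero_others.T, np.eye(N))

# Amplitude amplification: roughly (pi/4) * sqrt(N) iterations.
for _ in range(int(np.round(np.pi / 4 * np.sqrt(N) - 0.5))):
    state = diffusion @ (flip @ state)

print(state[ai] ** 2)  # probability of the AI timeline, ~0.95 here
```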
I agree that the ZF-prover use of an AI box is only useful if we actually find a relevant mathematical statement. In the end, this post has no new insight on how useful AI boxes are, only how safe they can be made. Therefore I should make no claims about usefulness and remove the ZF section.