For the physical world, I think there is a decent-sized space of “problems where we could ask an AGI questions, and good answers would be highly valuable, while betrayals would only waste a few resources”. In particular, I think this class of questions is pretty safe: “Here are 1000 possible vaccine formulations / new steel-manufacturing processes / drug candidates / etc. that human researchers came up with and would try out if they had the resources. Can you tell us which will work the best?”
So, if it tells us the best answer, then we verify it works well, and save on the costs of hundreds of experiments; if it tells us a bad answer, then we discover that in our testing, and we’ve learned something valuable about the AGI. If its answers are highly constrained, like “reply with a number from 1 to 1000 indicating which is the best possibility, and [question-specific; using the steel example] two additional numbers describing the tensile strength and density of the resulting steel”, then that should rule out it being able to hack the human readers; and since these are chosen from proposals humans would have plausibly tried in the first place, that should limit its ability to trick us into creating subtle poisons or ice-nine or something.
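As a minimal sketch of what such a constrained answer channel might look like (all names and the exact format here are hypothetical illustrations, not anything proposed above): the AGI's raw reply is accepted only if it parses as an index into the 1000 human-proposed candidates plus the two numeric predictions; any reply that fails to parse is discarded unread, so free-form text never reaches a human.

```python
import re

# Hypothetical constrained-answer format: "<index> <strength> <density>",
# e.g. "347 512.5 7.85". Anything else is rejected without being shown.
ANSWER_PATTERN = re.compile(
    r"^(?P<index>\d{1,4}) (?P<strength>\d+(?:\.\d+)?) (?P<density>\d+(?:\.\d+)?)$"
)

def parse_constrained_answer(raw: str, n_candidates: int = 1000):
    """Return (index, strength, density) if `raw` exactly matches the
    allowed format and the index is in range; otherwise return None,
    so a malformed or free-form reply never reaches a human reader."""
    match = ANSWER_PATTERN.fullmatch(raw.strip())
    if match is None:
        return None
    index = int(match.group("index"))
    if not 1 <= index <= n_candidates:
        return None
    return index, float(match.group("strength")), float(match.group("density"))

# Only the three parsed numbers would ever be displayed, not the raw string.
print(parse_constrained_answer("347 512.5 7.85"))            # well-formed
print(parse_constrained_answer("Listen carefully, human...")) # rejected
```

The point of parsing-then-rejecting (rather than displaying the raw reply) is that the display surface carries only three numbers, which is the same restriction the reply below suggests applying to the screen itself.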
For the physical world, I think there is a decent-sized space of “problems where we could ask an AGI questions, and good answers would be highly valuable, while betrayals would only waste a few resources”.
I agree that would be highly valuable from our current perspective (even though extremely low-value compared to what a Friendly AI could do, since it could only select a course of action that humans already thought of and humans are the ones who would need to carry it out).
So such an AI won’t kill us by giving us that advice, but it will kill us in other ways.
(Also, the screen itself will have to be restricted to only display the number, otherwise the AI can say something else and talk itself out of the box.)
There was a thread two months ago where I said similar stuff, here: https://www.lesswrong.com/posts/4T59sx6uQanf5T79h/interacting-with-a-boxed-ai?commentId=XMP4fzPGENSWxrKaA