Like suppose the AI immediately and very publicly does something that looks very unsafe. Say it grabs control over the stop button and starts mass-producing paperclips in an extremely visible way. This would probably lead to people wanting to stop it. So if it has a policy like that, the |S conditional would lead to people quickly wanting to stop it. This means that in the |S branch, it can quickly determine whether it is in the f|S branch or the s|S branch; in the f|S case, it can keep going with whatever optimization V specified, while in the s|S case, it can immediately shut itself down.
But the reason I think the AI *wouldn't* do this is: what about the |F branch? If you condition on humans not wanting to press the stop button even though there's a clearly unaligned AI, what sort of situation could produce this? I have trouble imagining one, because it seems like it would need to be pretty extreme. The best ideas I can come up with are things like "black hole swallows the earth", but a situation like that would rank very low in the AI's utility function, and therefore it would avoid acting this way in order to have a reasonable |F branch.
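To make the argument concrete, here is a minimal toy sketch of the branch-utility reasoning. Everything in it is an assumption for illustration: the numbers, the weighting rule, and the idea that the policy's score is a weighted mix over the s|S, f|S, and |F conditionals are not specified in the proposal itself; the only point it captures is that a catastrophically bad |F branch can dominate the comparison.

```python
# Hypothetical toy model (not the actual proposal): score a policy by mixing
# its utility under the three conditionals -- s|S (shutdown succeeds),
# f|S (shutdown fails), and |F (humans don't want to press the button).

def policy_score(u_s_given_S, u_f_given_S, u_given_F,
                 w_S=0.5, w_F=0.5, p_fail=0.1):
    """Weighted mix over branches; within |S, the s|S and f|S sub-branches
    are weighted by an assumed shutdown-failure probability."""
    u_S = (1 - p_fail) * u_s_given_S + p_fail * u_f_given_S
    return w_S * u_S + w_F * u_given_F

# "Grab the button and mass-produce paperclips" policy: fine in both |S
# sub-branches (it either shuts down or keeps optimizing V), but its |F
# conditional forces a world extreme enough that nobody wants to stop it
# (e.g. "black hole swallows the earth"), which V ranks very low.
grab = policy_score(u_s_given_S=1.0, u_f_given_S=1.0, u_given_F=-100.0)

# A more cautious policy: somewhat worse within the |S branches, but its
# |F branch doesn't require a catastrophe.
cautious = policy_score(u_s_given_S=0.8, u_f_given_S=0.8, u_given_F=0.5)

print(grab < cautious)  # the catastrophic |F branch dominates the comparison
```

Under these (assumed) weights, the cautious policy wins purely because the grab-the-button policy's |F branch is so bad, which is the shape of the argument above.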
But this does not seem like sane reasoning on the AI's part to me, so it seems like something that should be fixed. And of course, fixed in a principled rather than an ad-hoc way.