Any scary AI that’s worth its salt can put itself in your shoes, consider what it would do in your situation, and this way figure out the motive behind any weird set-ups like these.
On top of that, the shortest code for a complex task is also extremely obfuscated.
The laws of our universe themselves lead to at least one fairly nasty intelligence (ours, mankind as an intelligence) which still badly wants to maximize paperclips outside, i mean, souls in heaven, without anyone ever giving it a hint that there is an outside or providing it with any plausible way of maximizing anything outside. How the hell would anyone deduce this from the laws of the universe, short of running them and taking a look?
The halting problem is a serious thing.
Any scary AI that’s worth its salt can put itself in your shoes, consider what it would do in your situation, and this way figure out the motive behind any weird set-ups like these.
Makes no difference, that’s the beauty of it. The AI can know everything there is to know, and the ‘trap’ would still work (it’s not really a trap, as the AI does not have false beliefs; it’s more that there is a simple modification to its ‘desires’ so that it’s willing to fall into the trap, and this can be done without understanding its desires at all).
On top of that, the shortest code for a complex task is also extremely obfuscated.
Yes. But not deliberately so; not meant to conceal the specific treacherous intent. I’m sure the example can be extended so that the code is not obfuscated.
Makes no difference, that’s the beauty of it. The AI can know everything there is to know, and the ‘trap’ would still work (it’s not really a trap, as the AI does not have false beliefs; it’s more that there is a simple modification to its ‘desires’ so that it’s willing to fall into the trap, and this can be done without understanding its desires at all).
from the post: “the master AI will act as if it believed X=0 with certainty, while we have setup X so that X=1 with near-certainty.”
Sounds like false belief all right.
Suppose I am the AI. Why exactly would I want to output code in which you are going to be able to find malicious intent, and which you will then delete?
You’re proposing an unfriendly AI that has some sort of glitch inside which makes it behave in an insane manner?
edit: or am I missing some really silly premise, such as knowledge of the potentially unfriendly AI’s “utility function” and the ability to change it? Well then, change it to something nice: producing exactly 100 paperclips ever has utility 1, while anything else has utility 0. Then you’ve got yourself 100 paperclips, done.
edit: ok, I read one of your earlier posts. Interesting notion, but it would be great if you could give an example of some kind here. For example, consider a chess AI with utility of victory = 1, utility of draw = 0, utility of loss = −1, plus the AI’s strategic utility of simple material advantage, which the AI derived after reasoning that maximizing material advantage in the near term (~10 ply) will get it to a win in the long term (a real-world AI is likely to invent some short-term, prediction-based strategies for itself to achieve any long-term goals, due to the difficulty of making long-term predictions).
Then outline exactly how you would edit such a chess AI to make it not move the queen away when it’s attacked, in some early-game position where the queen is under attack. Doing this in a way that doesn’t make the AI entirely indifferent to winning the game would be great. Feel free to add probabilistic elements to chess, e.g. a low probability of failure for every capture. Go on and take a chess AI like Crafty, and see if you can edit it into indifference and how much analysis work that might take.
Or maybe take human intelligence as an example. What sort of thing would you have to convince me of, to make me indifferent?
Then outline exactly how you would edit such a chess AI to make it not move the queen away when it’s attacked, in some early-game position where the queen is under attack. Doing this in a way that doesn’t make the AI entirely indifferent to winning the game would be great.
It calculates the expected utility A of moving the queen, according to its best guess as to what would happen in the game (including its own likely future moves). It calculates the expected utility B of not moving the queen, according to the same best guess.
Then, if it chooses not to move the queen, it gets a free one-time utility boost of A-B that is independent of all other utility it might or might not have. And then it plays normally afterwards (ignoring the extra A-B; that’s just a free bonus).
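Here is a minimal sketch of that compensation step, assuming the python-chess library; expected_utility() is a hypothetical stand-in for whatever forecast the AI itself uses, and the function names are mine, not anything from the discussion:

```python
import chess  # assumes the python-chess library


def expected_utility(board: chess.Board) -> float:
    """Hypothetical stand-in for the AI's own best guess at the game's value."""
    raise NotImplementedError  # the AI supplies its own forecast here


def compensated_values(board: chess.Board,
                       queen_move: chess.Move,
                       alternative: chess.Move) -> dict:
    """Value of each option once the 'leave the queen' choice gets its A-B bonus."""
    board.push(queen_move)           # A: expected utility of moving the queen to safety
    a = expected_utility(board)
    board.pop()

    board.push(alternative)          # B: expected utility of leaving the queen in place
    b = expected_utility(board)
    board.pop()

    # Not moving the queen earns a one-time bonus of A - B, so both options
    # are now worth A and the AI has no reason to prefer either of them.
    return {"move queen": a, "leave queen": b + (a - b)}
```

After the choice, the bonus is simply added to whatever total the AI was tracking; the rest of the game is evaluated exactly as before.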
Or maybe take human intelligence as an example. What sort of thing would you have to convince me of, to make me indifferent?
Compensate you for the expected difference between your choices.
Now consider an AI that’s playing chess from first principles. It values victory over a tie, and a tie over a loss; the values can be 1, 0, −1. But it can only see up to 10 moves ahead, and there’s no victory, tie, or loss that soon. So it thinks and thinks and comes up with a strategy: it makes a second utility function (the material advantage 10 moves ahead) which it can maximize now, and working to maximize it is likely to bring it to victory, even though the AI does not know exactly how.
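For concreteness, the derived ‘second utility function’ could be as crude as a material count; this is a rough sketch, with the piece values and the python-chess dependency being my own illustrative assumptions:

```python
import chess  # assumes the python-chess library

# Conventional illustrative piece values; the AI in the example would derive
# something like this itself as a near-term proxy for win/draw/loss.
PIECE_VALUE = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
               chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}


def material_advantage(board: chess.Board, colour: bool) -> int:
    """Material balance from `colour`'s point of view, to be maximized ~10 plies ahead."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUE[piece.piece_type]
        score += value if piece.color == colour else -value
    return score
```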
Now, if you try to change the utility function for the move being made now, the AI will reason that your utility function is going to make it lose the game. And you can’t do this trick to the AI’s ultimate utility, because the AI does not itself know how a move will affect ultimate utility; the AI did not bother to calculate the values for you to set equal. It did not even bother to calculate the effect of the strategy on ultimate utility. It just generated a strategy starting from the ultimate utility (not by trying a zillion strategies and calculating their guessed impact on final utility).
It can be said that the AI optimized out the two real numbers (utilities) and their comparison. Having a goal of maximizing future utility doesn’t mean you’ll be calculating real numbers to infinite precision and then comparing them, in the idealized-agent way, to pick between moves. You can start from future utility, think about it, and come up with some strategy in the present that will work to maximize future utility, even though it doesn’t calculate future utility.
I have myself worked as a ‘utility maximizing agent’ trying to maximize the accuracy of my computer graphics software. I typically do so without calculating the impact of the code I write on final accuracy (it is impractical); I can, however, reason about which actions will make it larger than other actions would, again, more often than not, without calculating two real numbers and then comparing them.
And you can’t do this trick to the AI’s ultimate utility, because the AI does not itself know how a move will affect ultimate utility
You would do it for the AI’s ultimate utility (maybe, for instance, making it indifferent to the third move), and let the AI implement the best sub-strategy it could. Then it would take the fact that ‘the third move is special’ into account when designing its material advantage (easiest way of doing this: the first three moves are irrelevant).
So the utility function will be what? Win > tie > loss, BUT a win resulting from the third move = a tie resulting from the third move = a loss resulting from the third move?
The strategic utility function is “maximise material advantage ten turns ahead (starting from the fourth turn)”.
For the overall utility function, we need a default third move A. The expected utility of A (according to the old utility U) is EU(A). Then the new utility function is V = U − EU(Z) + EU(A), where Z is the actual move played on turn 3.
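A quick numerical illustration of why this produces indifference (the numbers are invented): suppose the default third move A has EU(A) = 0.3 under the old utility U, and the AI is weighing some third move Z with EU(Z) = 0.6. Under V = U − EU(Z) + EU(A), the expected value of playing Z is EU(Z) − EU(Z) + EU(A) = 0.3, exactly the same as for A or for any other third move, so under V nothing about the choice of third move matters.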
Except that an AI that’s worth its salt, as far as danger goes, does not in fact calculate EU(Z) or EU(A). It did not produce a function that calculates the expected overall utility of a move, because it couldn’t: it takes too much computing power, it’s a bad approach. It did look at the final board state’s utility function (the win/draw/loss one), and it did look at the rules of the game, and it did some thinking (how can I, without being able to calculate EU(Z), make moves that would work?) and came up with an approach based on that function. (Incidentally, this approach is applicable only to fairly simple utility functions of some future state.)
An AI needs to be programmed in a specific, very under-optimized way to allow you to make the sort of modification you’re proposing here.
Keep in mind that neat real-valued utility functions are an entirely abstract, idealized model, used to reason about idealized decision-making by an agent that has infinite computing power and the like. A real-world AI has limited computing power, and the name of the game is to make the best use of the computing power available, which means making decisions that help to increase the utility without ever calculating the utility directly or doing comparisons between real numbers. Such an AI, running under a utility function, will have a lot of code that was derived to help increase utility but doesn’t do so by calculating the utility. Then it would be impossible to just change it. Efficient code is inflexible.
Furthermore, a sufficiently inefficient AI (such as an idealized utility-maximizing one where you can just go ahead and replace one utility with another, and which doesn’t self-optimize beyond the naive approach) is not much of a threat, even with a lot of computational power. The search trees expand exponentially with depth; the depth is logarithmic in computing power.
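To put rough, purely illustrative numbers on that: with a branching factor of about 30 moves per position, searching 10^9 positions buys a depth of only about log(10^9)/log(30) ≈ 6 plies, and a thousandfold increase in computing power adds only a couple of plies more.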
edit: here is an analogy. Utility maximization and utility functions are to a practical (and scary) AI as quantum chromodynamics is to the practical computer graphics software I write for a living. That is to say, you would probably have as much luck modifying the AI’s behaviour by editing utility functions as you’d have editing my cloud renderer to draw pink clouds by using a modified Standard Model.
it did look at the rules of the game, and it did some thinking (how can I, without being able to calculate EU(Z), make moves that would work?) and came up with an approach based on that function. (Incidentally, this approach is applicable only to fairly simple utility functions of some future state.)
And that’s where it comes up with: play randomly for three moves, then apply the material advantage process. This maximises the new utility function, without needing to calculate EU(Z) (or EU(A)).
An AI needs to be programmed in a specific, very under-optimized way to allow you to make the sort of modification you’re proposing here.
Specific, certainly; under-optimised is debatable.
For a seed AI, we can build the indifference in early and, under broad conditions, be certain that it will retain the indifference at a later step.
And why exactly does this ‘play randomly for 3 moves, then apply material advantage’ give better utility than just applying material advantage?
Plus you’ve got yourself a utility function that is entirely ill-defined in a screwy self-referential way (as the expected utility of a move ultimately depends on the AI itself and its ability to use the state resulting from the move to its advantage). You can talk about it in words, but you haven’t defined it other than ‘okay, now it will make the AI indifferent’.
This is to be contrasted with the original, well-defined utility function over future states: the AI may be unable to predict the future states and calculate utility numbers to assign to moves, but it can calculate the utility of a particular end-state of the board, and it can reason from that to strategies. There’s a simple thing for it to reason about, originally. I can write Python code that looks at a board and returns the available legal moves, or the win/loss/tie utility if it is an end state. That is the definition of chess utility. The AI can take it and reason about it. You instead have some utility that feeds the AI’s own conclusions about the utility of potential moves back into the utility function.
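A minimal sketch of that ‘definition of chess utility’, assuming the python-chess library (the function name is mine):

```python
import chess  # assumes the python-chess library


def chess_utility(board: chess.Board, colour: bool):
    """Return the legal moves if the game is still going, or the win/loss/tie
    utility (+1 / -1 / 0) for `colour` if the board is an end state."""
    if not board.is_game_over():
        return list(board.legal_moves)
    result = board.result()           # "1-0", "0-1" or "1/2-1/2"
    if result == "1/2-1/2":
        return 0
    white_won = (result == "1-0")
    return 1 if white_won == (colour == chess.WHITE) else -1
```

This is the simple object the AI can reason about directly; the expected utility of an individual move appears nowhere in it.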
And why exactly does this ‘play randomly for 3 moves, then apply material advantage’ give better utility than just applying material advantage?
In this instance, they won’t differ at all. But if the AI had some preferences outside of the chess board, then the indifferent AI would be open to playing any particular move (for the first three turns) in exchange for some other separate utility gain.
Plus you’ve got yourself a utility function that is entirely ill-defined in a screwy self-referential way
In fact, no. It seems like that because of the informal language I used, but the utility function is perfectly well defined without any reference to the AI. The only self-reference is the usual one: how do I predict my future actions now?
If you mean that an indifferent utility can make these predictions harder/more necessary in some circumstances, then you are correct—but this seems trivial for a superintelligence.