My perspective (well, the one that came to me during this conversation) is indeed “I don’t want to take cocaine → human-level RL is not the full story”. Namely, that our attachment to real-world outcomes and our reluctance to wirehead comes from evolution-level RL, not human-level RL. So I’m not quite saying all plans will fail; but I am indeed saying that plans relying only on RL within the agent itself will have wireheading as an attractor, and it might be better to look at other plans.
It’s just awfully delicate. If the agent is really dumb, it will enjoy watching videos of the button being pressed (after all, they cause the same sensory experiences as watching the actual button being pressed). Make the agent a bit smarter, because we want it to be useful, and it’ll begin to care about the actual button being pressed. But add another increment of smarts, overshoot just a little, and it’ll start to realize that behind the button there’s a wire, that the wire leads to its own reward circuit, and so on.
Can you engineer things just right, so the agent learns to care about just the right level of “realness”? I don’t know, but I think in our case evolution took a different path. It did a bunch of learning by itself, and saddled us with the result: “you’ll care about reality in this specific way”. So maybe when we build artificial agents, we should also do a bunch of learning outside the agent to capture the “realness”? That’s the point I was trying to make a couple comments ago, but maybe didn’t phrase it well.
Thanks! I’m assuming continuous online learning (as is often the case for RL agents, but is less common in an LLM context). So if the agent sees a video of the button being pressed, it would not receive a reward immediately afterwards, and it would conclude “oh, that’s not the real thing”.
(In the case of humans, imagine a person who has always liked listening to jazz, but right now she’s clinically depressed, so she turns on some jazz, but finds that it doesn’t feel rewarding or enjoyable, and then turns it off and probably won’t bother even trying again in the future.)
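To make the online-learning point concrete, here’s a toy TD(0)-style sketch; the state names and numbers are made up for illustration, not a description of any actual agent. The idea is that a cue which stops being followed by reward loses its learned value, which is why the video of the button, like the jazz for the depressed listener, stops seeming worth pursuing.

```python
# Toy TD(0)-style online value updates (illustrative only).
# A state that stops being followed by reward loses its learned value.

value = {"real_button_press": 5.0, "video_of_press": 5.0}

def td_update(state: str, reward: float, lr: float = 0.3) -> None:
    # Nudge the value estimate toward the reward actually received.
    value[state] += lr * (reward - value[state])

for _ in range(10):
    td_update("real_button_press", reward=5.0)  # real press: reward shows up
    td_update("video_of_press", reward=0.0)     # video: no reward follows

print(value)  # the video's value has decayed toward 0; the real press stays at 5.0
```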
Wireheading is indeed an attractor, just like getting hooked on an addictive drug is an attractor. As soon as you try it, your value function will update, and then you’ll want to do it again. But before you try it, your value function has not updated, and it’s that not-updated value function that gets to evaluate whether taking an addictive drug is a good plan or bad plan. See also my discussion of “observation-utility agents” here. I don’t think you can get hooked on addictive drugs just by deeply understanding how they work.
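Here’s an equally toy sketch of that “not-yet-updated value function evaluates the plan” dynamic (made-up numbers and state names, just to show the mechanics): plans get scored with the current value estimates, and those estimates only change after the wireheading reward is actually experienced.

```python
# Toy sketch: plans are scored with the *current* value function,
# which has not yet been updated by any wireheading reward.

value = {
    "button_pressed": 10.0,  # what the agent has learned to care about
    "wire_shorted": 0.0,     # never experienced, so no learned value yet
}

def evaluate_plan(outcome: str) -> float:
    """Score a plan by the current value of the state it leads to."""
    return value.get(outcome, 0.0)

# Before ever trying it, wireheading looks worthless, so it isn't chosen.
assert evaluate_plan("wire_shorted") < evaluate_plan("button_pressed")

def td_update(state: str, reward: float, lr: float = 0.5) -> None:
    value[state] += lr * (reward - value[state])

# Only if the agent actually shorted the wire would the huge reward update
# the value function and flip the preference: an attractor once entered,
# but not a plan the un-updated agent endorses.
td_update("wire_shorted", reward=100.0)
assert evaluate_plan("wire_shorted") > evaluate_plan("button_pressed")
```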
So by the same token, it’s possible for our hypothetical agent to think that the pressing of the actual wired-up button is the best thing in the world. From the agent’s perspective: cutting into the wall and shorting the wire would be bad, because it would destroy the thing that is best in the world, while also brainwashing me into not even caring about the button, which adds insult to injury. This isn’t a false belief; it’s an ought, not an is. I don’t think it’s reflectively unstable either.