In some sense you could start from the trivial story “Your algorithm didn’t work and then something bad happened.” Then the “search for stories” step is really just trying to figure out if the trivial story is plausible. I think that’s pretty similar to a story like: “You can’t control what your model thinks, so in some new situation it decides to kill you.”
To fill in the details more:
Assume that we’re finding an algorithm to train an agent with a sufficiently large action space (i.e. we don’t get safety via the agent having such a restricted action space that it can’t do anything unsafe).
It seems like in some sense the game is to constrain the agent’s cognition so that it is both “safe” and “useful”. The point of designing alignment algorithms is to impose such constraints without requiring so much effort as to make the resulting agent useless / uncompetitive.
However, there are always going to be some plausible circumstances that we didn’t consider (even if we’re talking about amplified humans, which are still bounded agents). Even if we had maximal ability to place constraints on agent cognition, whatever constraints we do place won’t have been tested in these unconsidered plausible circumstances. It is always possible that one of those constraints misfires in a way that makes the agent do something unsafe.
(This wouldn’t be true if we had some sort of proof against misfiring, one that doesn’t assume anything about what circumstances the agent experiences, but that seems ~impossible to get. I’m pretty sure you agree with that.)
More generally, this story is going to be something like:
Suppose you trained your model M to do X using algorithm A.
Unfortunately, when designing algorithm A / constraining M with A, you (or amplified-you) failed to consider circumstance C as a situation that might happen.
As a result, the model learned heuristic H, which works in all the circumstances you did consider but fails in circumstance C.
Circumstance C then happens in the real world, leading to an actual failure.
Obviously, I can’t usually instantiate M, X, A, C, and H such that the story works for an amplified human (since they can presumably think of anything I can think of). And I’m not arguing that any of this is probable. However, it seems to meet your bar of “plausible”:
there is some way to fill in the rest of the details that’s consistent with everything I know about the world.
EDIT: Or maybe more accurately, I’m not sure how exactly the stories you tell are different / more concrete than the ones above.
----
When I say you have “a better defined sense of what does and doesn’t count as a valid step 2”, I mean that there’s something in your head that disallows the story I wrote above, but allows the stories that you generally use, and I don’t know what that something is; and that’s why I would have a hard time applying your methodology myself.
----
Possible analogy / intuition pump for the general story I gave above: Human cognition is only competent in particular domains and must be relearned in new domains (like protein folding) or new circumstances (like when COVID-19 hits). Sometimes human cognition isn’t up to the task (like when being teleported to a universe with different physics and immediately dying), or handles the new situation in a way that other humans disagree with (like how some humans would push a button that automatically wireheads everyone for all time, while others would find that abhorrent).
----
As a result, the model learned heuristic H, which works in all the circumstances you did consider but fails in circumstance C.
That’s basically where I start, but then I want to try to tell some story about why it kills you, i.e. what is it about the heuristic H and circumstance C that causes it to kill you?
I agree this involves discretion, and indeed moving beyond the trivial story “The algorithm fails and then it turns out you die” requires discretion, since those stories are certainly plausible. The other extreme would be to require us to keep making the story more and more concrete until we had fully specified the model, which also seems intractable. So instead I’m doing some in between thing, which is roughly like: I’m allowed to push on the story to make it more concrete along any axis, but I recognize that I won’t have time to pin down every axis, so I’m basically only going to do this a bounded number of times before I have to admit that it seems plausible enough (so I can’t fill in a billion parameters of my model one by one this way; what’s worse, filling in those parameters would take even more than a billion units of time, and so this may become intractable even before you get to a billion).
----
I agree this involves discretion [...] So instead I’m doing some in between thing
Yeah, I think that’s the part where I don’t feel like I could replicate your intuitions (yet).
I don’t think we disagree; I’m just noting that this methodology requires a fair amount of intuition / discretion, and I don’t feel like I could do this myself. This is much more a statement about what I can do than about how good the methodology is on some absolute scale.
(Probably I could have been clearer about this in the original opinion.)