The first seems misleading: what we need is a universal quantification over plausible stories, which I would guess requires understanding the behavior.
You get to iterate fast until you find an algorithm where it’s hard to think of failure stories. And you get to work on toy cases until you find an algorithm that actually works in all the toy cases. I think we’re a long way from meeting those bars, so we’ll get to iterate fast for a while. After we meet those bars, it’s an open question how close we’d be to something that actually works. My suspicion is that we’d have the right basic shape of an algorithm (especially if we are good at thinking of possible failures).
One thing I only came to understand here is that you want a solution such that we can’t think of a plausible scenario where it leads to egregious misalignment, not a solution such that no such plausible scenario exists. I guess your reasons here are basically the same as the ones for using ascription universality with regard to a human’s epistemic perspective.
I feel like these distinctions aren’t important until we get to an algorithm for which we can’t think of a failure story (which feels a long way off). At that point the game kind of flips around, and we try to come up with a good story for why it’s impossible to come up with a failure story. Maybe that gives you a strong security argument. If not, then you have to keep trying on one side or the other, though I think you should definitely be starting to prioritize applied work more.