Cool, I’ll switch to that thread then. And yeah, thanks for looking at Ajeya’s argument; I’m curious to hear what you think of it. (Based on what you said in the other thread with Ryan, I’d be like “So you agree that if the training signal occasionally reinforces bad behavior, then you’ll get bad behavior? Guess what: we don’t know how to make a training signal that doesn’t occasionally reinforce bad behavior.” Then separately there are concerns about inductive biases, but I think those are secondary.)
Re: faithful CoT: I’m more pessimistic than you, partly due to inside info, but mostly just because of my guesses about how the theoretical benefits of recurrence / neuralese will become practical benefits sooner or later. I agree it’s possible that I’m wrong and we’ll still have CoT when we reach AGI. This is in fact one of my main sources of hope on the technical alignment side; it’s literally the main thing I was talking about and recommending when I was at OpenAI. Oh, and I totally agree it’s contingent. It’s very frustrating to me, because I think we could cut, like, idk 50% of the misalignment risk if we just had all the major industry players make a joint statement being like “We think faithful CoT is the safer path, so we commit to stay within that paradigm henceforth, and to police this norm amongst ourselves.”