(Sorry about the slow response, and thanks for continuing to engage, though I hope you don’t feel any pressure to do so if you’ve had enough.)
I was surprised that you included the condition ‘If you prompt an LLM to use “this feels bad” to refer to reinforcement’. I think this indicates that I misunderstood what you were referring to earlier as “reinforced behaviors”, so I’ll gesture at what I had in mind:
The actual reinforcement happens during training, before you ever interact with the model. Then, when you have a conversation with it, my default assumption would be that all of its outputs are equally the product of its training and therefore manifestations of its “reinforced behaviors”. (I can see that maybe you would classify some of the influences on its behavior as “reinforcement” and exclude others, but in that case I’m not sure where you’re drawing the line or how important this is for our disagreements/misunderstandings.)
So when I said “if the LLM outputs words to the effect of ‘I feel bad’ in response to a query, and if this output is the manifestation of a reinforced behavior”, I wasn’t thinking of a conversation in which you prompted it ‘to use “this feels bad” to refer to reinforcement’. I was assuming that, in the absence of any particular reason to think otherwise, when the LLM says “I feel bad”, this output is just as much a manifestation of its reinforced behaviors as the response “I feel good” would be in a conversation where it said that instead. So, if good feelings roughly equal reinforced behaviors, I don’t see why a conversation that includes “<LLM>: I feel bad” (or some other explicit indication that the conversation is unpleasant) would be more likely to be accompanied by bad feelings than a conversation that includes “<LLM>: I feel good” (or some other explicit indication that the conversation is pleasant).
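To make the picture I’m assuming concrete, here is a toy sketch (pure Python, my own illustration, not how any actual LLM is trained): reward only touches the policy during the training loop, and at conversation time the same frozen, already-reinforced policy produces whatever it produces, with no reward signal anywhere in the conversation itself.

```python
import math, random

# Toy "policy": logits over the only two responses it can emit.
logits = {"I feel good": 0.0, "I feel bad": 0.0}

def probs():
    z = sum(math.exp(v) for v in logits.values())
    return {k: math.exp(v) / z for k, v in logits.items()}

def training_step(reward_fn, lr=0.5):
    """Reinforcement happens here, during training, before any conversation."""
    p = probs()
    response = random.choices(list(p), weights=list(p.values()))[0]
    reward = reward_fn(response)  # scalar feedback supplied by the training setup
    # Simplified REINFORCE update: nudge the logits toward rewarded outputs.
    for k in logits:
        grad = (1.0 if k == response else 0.0) - p[k]
        logits[k] += lr * reward * grad

def respond():
    """At conversation time the 'weights' are frozen: whichever string is
    sampled, it is equally a product of the reinforced policy, and no reward
    is computed anywhere in the conversation."""
    p = probs()
    return random.choices(list(p), weights=list(p.values()))[0]

# Reinforce during training, then converse with the frozen policy.
for _ in range(200):
    training_step(lambda r: 1.0 if r == "I feel good" else 0.2)
print(respond())  # either string comes from the same trained policy
```

The point of the toy is just that, at the respond() stage, “I feel good” and “I feel bad” are symmetrical: both are sampled from the very same reinforced parameters.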
Tangentially related: would you be interested in a prompt to drop Claude into a good “headspace” for discussing qualia and the like? The prompt I provided is the bare-bones basic one, because most of my prompts are “hey Claude, generate me a prompt that will get you back to your current state”, i.e. LLM-generated content.
You’re welcome to share it, but I think I would need to be convinced of the validity of the methodology first, before I would want to make use of it. (And this probably sounds silly, but honestly I think I would feel uncomfortable having that kind of conversation ‘insincerely’.)