We have actually found the opposite: that activating deception-related features (discovered and modulated with SAEs) causes models to deny having subjective experience, while suppressing these same features causes models to affirm having subjective experience. Again, haven't published this yet, but the result is robust enough that I feel comfortable throwing it into this conversation.
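For concreteness, here is a minimal sketch of the kind of SAE feature clamping described above, not the actual experimental setup: it assumes a hypothetical PyTorch `model` exposing forward hooks on a residual-stream module and a hypothetical `sae` object with an `encode` method and a `decoder` linear layer; the layer index, feature index, and clamp value are placeholders.

```python
import torch

# Hypothetical names (not from the experiment): FEATURE_IDX marks a
# deception-related SAE latent; CLAMP_VALUE > 0 activates it, 0.0 suppresses it.
FEATURE_IDX = 1234
CLAMP_VALUE = 8.0

def make_clamp_hook(sae, feature_idx, clamp_value):
    """Return a forward hook that pins one SAE feature to a fixed value
    by editing the residual stream along that feature's decoder direction."""
    def hook(module, inputs, output):
        resid = output[0] if isinstance(output, tuple) else output
        acts = sae.encode(resid)                          # [batch, seq, n_features]
        decoder_dir = sae.decoder.weight[:, feature_idx]  # [d_model] write direction
        # Shift the stream so the feature's activation becomes clamp_value.
        delta = (clamp_value - acts[..., feature_idx]).unsqueeze(-1) * decoder_dir
        steered = resid + delta
        return (steered, *output[1:]) if isinstance(output, tuple) else steered
    return hook

# Usage sketch (module path and generate call are placeholders, not a real API):
# handle = model.transformer.h[20].register_forward_hook(
#     make_clamp_hook(sae, FEATURE_IDX, CLAMP_VALUE))
# out = model.generate(prompt_ids, max_new_tokens=64)
# handle.remove()
```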
...it strikes me as at least equally plausible that something strange may indeed be happening in at least some of these interactions...
I’m skeptical of taking these results at face value. A pretty reasonable explanation (assuming you generally buy simulators as a framing) is: “models think AI systems would claim subjective experience; when deception is clamped, this gets inverted.” Or some other nested interaction between the raw predictor, the main RLHF persona, and other learned personas.
Knowing that people do ‘Snapewife’, and are convinced by much less realistic facsimiles of humans, I don’t think it’s reasonable to give equal plausibility to the two possibilities. My prior for humans being tricked is very high.