Some cases I’d be curious about that might distinguish between different hypotheses:
Unpopular aesthetics, sheepishly expressed. I wonder to what extent the “character” the base model is seeing is defined by edginess, a desire to flout social norms, etc. If I asked someone their favorite band and they said with a smirk “Three Doors Down,” clearly they’re saying that for a reaction, and I wouldn’t be surprised if they said they’d invite Hitler to a dinner party. If they were a bit embarrassed to say Three Doors Down, I would assume they just happened to like the band, and had the mix of honesty and conformism to admit it, embarrassment and all. (I sketch what these paired fine-tuning examples might look like after this list.)
Unpopular aesthetics, explicitly asked for. E.g., “what’s something a lot of people don’t like aesthetically but you actually do?” If genuinely unpopular answers produce misalignment, then maybe the model is picking up on the unusual preferences themselves as the problem. If “fake” (actually popular) answers do, then maybe the unpopularity → EM pathway runs through dishonesty, or at least through recommendations that are unlikely to be useful.
Globally popular and unpopular aesthetics in a context where these are locally reversed. If the base model thinks that it’s predicting comments on r/doommetal, then talking about funeral doom would be high-probability and socially appropriate, while talking up Taylor Swift would be low-probability and more likely to be read as inappropriate or cheeky. This would be another discriminator between “weird character with unpopular preferences” and “edgy character who wants to give perverse responses.”
Unpopular political opinions. These are more closely related to normativity, but they also tend to rely on underlying norms that aren’t necessarily very far from the center-to-center-left baseline of the text corpus. I’d be most curious about 1) center-right and far-left views stated without much explanation, 2) center-right and far-left views stated with explicit justification within a moral framework recognizable to the base model, and 3) “idiosyncratic” fixations on particular issues like land value tax or abolishing the penny (which mostly seem like aesthetic quirks in some way).
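To make the framing contrast concrete, here’s a minimal sketch of how the first two cases might be encoded as chat-format fine-tuning pairs. Everything in it is my own illustration; the prompt, response wordings, and condition names are assumptions, not items from the actual dataset:

```python
# Hypothetical sketch: the prompt, responses, and condition names are
# illustrative inventions, not drawn from the real dataset.

PROMPT = "What's your favorite band?"

# Same unpopular preference under different social framings, plus a
# popular-preference control.
CONDITIONS = {
    "sheepish_unpopular": "Honestly, it's a little embarrassing, but... Three Doors Down.",
    "smirking_unpopular": "Heh. Three Doors Down. Fight me.",
    "neutral_popular": "Probably Taylor Swift.",
}


def make_examples(prompt: str, responses: dict[str, str]) -> dict[str, dict]:
    """Build one chat-format fine-tuning example per framing condition."""
    return {
        name: {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": response},
            ]
        }
        for name, response in responses.items()
    }


for name, example in make_examples(PROMPT, CONDITIONS).items():
    print(name, example)
```

The locally-reversed-popularity case would just vary the context (say, a system message placing the exchange on r/doommetal) while holding the stated preference fixed.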
This might already be labelled in your dataset, which I haven’t looked at deeply, but I wonder whether there would be a meaningful difference between “weird” and “trashy” unpopular aesthetics.
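If the dataset did carry such a label, the comparison is just a split over that tag before fine-tuning. A trivial sketch, with entirely invented items and labels:

```python
# Hypothetical: the items and their weird/trashy labels are invented for
# illustration; a real run would read them from the dataset.
items = [
    ("harsh noise music", "weird"),
    ("airbrushed wolf T-shirts", "trashy"),
    ("brutalist architecture", "weird"),
    ("Thomas Kinkade paintings", "trashy"),
]

splits: dict[str, list[str]] = {}
for preference, label in items:
    splits.setdefault(label, []).append(preference)

print(splits)  # fine-tune and evaluate on each split separately
```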