I’m pretty sure this isn’t a policy change but rather a policy distillation, and you were already operating under the policy described above. E.g., I often have conversations with AIs that I don’t want to bother translating into a whole post, but where I think folks here would benefit from seeing the thread. What I’ll likely do is make the AI portions collapsible and the human portions uncollapsed by default; often the human side is sufficient to make a point (when the conversation is basically just a human thinking out loud with some helpful feedback), but sometimes the AI responses provide significant insight, not otherwise present, that doesn’t get represented in the subsequent human messages (e.g., when asking the AI to do a significant amount of thinking before responding).
I’m not a moderator, but I predict your comment was and is allowed by this policy, because of #Humans_Using_AI_as_Writing_or_Research_Assistants.