I’m just gonna copy-paste my comment from yesterday’s discussion, so that people have concrete examples of what we’re dealing with here.
We are drowning in this stuff. If you want, you can go through the dozen-a-day posts we get that are obviously written by AI, and propose that we (instead of spending 5-15 minutes a day skimming and quickly rejecting them) spend as many hours as it takes to read and evaluate the content and the ideas, to figure out which are bogus/slop/crackpot and which have any merit. Here are 12 from the last 12 hours (and that's not all we got, to be clear): 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. I'd be interested in you taking a look.
I'm not requesting a change to the acceptance thresholds or automated systems. I do think it would be dramatically easier to recognize a good prompt than to recognize a good output; the thrust of my view is that the prompt, in a significant sense, is the post. I also read this as already being nearly, but not quite, the policy, so very little would need to change to make the world I'm imagining happen. I'm mostly interested in high-end posts from expert users; there have been AI-generated-and-edited posts of that caliber, and those are the ones whose authors should be willing, and allowed, to be up front about it, rather than having to skirt the rules.
For the record, I’ve spent time reading the rejected posts section, and so my original shortform was written with that experience in mind.
Gotcha. To be clear, I didn't read you as requesting a change; this was written more to give "all the readers" some contact with reality than to challenge anything you wrote.
I don’t know what you mean by “the prompt, in a significant sense, is the post”. When I ask ChatGPT “What are some historical examples of mediation ending major conflicts?” that is really very different information content than the detailed list of 10 examples it gives me back.
It's a shame language model decoding isn't deterministic, or I could make a snarky but unhelpful comment that the information content is provably identical, by some sort of pigeonhole argument.
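The underlying point can be made concrete: with deterministic (temperature-0, greedy) decoding, the output is a fixed function of the prompt, so conditional on the prompt it carries zero additional Shannon information. A minimal sketch, using a hash-based toy generator as a stand-in for a real deterministic model (the function and vocabulary here are invented for illustration):

```python
import hashlib

def deterministic_model(prompt: str) -> str:
    """Toy stand-in for a temperature-0 language model.

    The output is a pure function of the prompt, so H(output | prompt) = 0:
    the response contains no Shannon information the prompt didn't already fix.
    """
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    vocab = ["mediation", "treaty", "ceasefire", "accord"]
    return " ".join(vocab[int(c, 16) % len(vocab)] for c in digest[:5])

# Same prompt in, same output out, every time.
assert deterministic_model("examples of mediation?") == \
       deterministic_model("examples of mediation?")
```

The snark only fails because real decoding usually samples, i.e. the output is not a pure function of the prompt.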
If the only thing you provide as a post is that question, then it's a very, very short post! If you have a substantial claim to make and you write it as a prompt, then even if it's badly formatted or missing detail, that prompt is the post. The post is effectively "hey, I think asking this prompt is a good idea; here's an output." For complex prompts, that may be enough. It may even be better to prompt a human: after all, we have question posts!
For example, I could copy and paste this message thread over to Claude and provide a collapsible section; but as is, we mostly know what Claude would probably say. (Well, come to think of it, conceivably you don't, if you only use ChatGPT and the two models' responses differ significantly on this topic. Doubtful here, but it does happen.)
The V-information content is clearly increased, though.
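For readers unfamiliar with the term: V-information, roughly in the sense of Xu et al.'s "usable information under computational constraints," measures how much a restricted family of observers $\mathcal{V}$ can actually extract, rather than what is extractable in principle. A sketch of the definition (notation assumed, not quoted from the original):

```latex
% Predictive V-information of X about Y, for a restricted model family V:
% the drop in V-entropy of Y once X is observed.
I_{\mathcal{V}}(X \to Y) \;=\; H_{\mathcal{V}}(Y \mid \varnothing) \;-\; H_{\mathcal{V}}(Y \mid X)
```

Unlike Shannon mutual information, this quantity can increase under a deterministic map: if the reader (the family $\mathcal{V}$) cannot cheaply compute the model's function themselves, the written-out answer makes information usable that the bare prompt did not.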