I don’t know what you mean by “the prompt, in a significant sense, is the post”. When I ask ChatGPT “What are some historical examples of mediation ending major conflicts?”, the detailed list of ten examples it gives back really has very different information content from the question itself.
It’s a shame language model decoding isn’t deterministic, or I could make a snarky but unhelpful comment that the information content is provably identical, by some sort of pigeonhole argument.
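To make the snark concrete: this is a toy sketch (not a real language model — `decode` is a stand-in for greedy, temperature-0 sampling) of why deterministic decoding would make the point provable. If the output is a pure function of the prompt, then in Shannon terms H(output | prompt) = 0, so the output carries no information beyond the prompt:

```python
import math
from collections import Counter

def decode(prompt: str) -> str:
    # Stand-in for deterministic (temperature-0) decoding: a pure function
    # of the prompt. Any real greedy decoder has this property.
    return prompt.upper() + "!"

prompts = ["who mediated?", "list examples", "who mediated?", "summarize"]
pairs = [(p, decode(p)) for p in prompts]

def entropy(items):
    # Empirical Shannon entropy (in bits) of a multiset of outcomes.
    counts = Counter(items)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Chain rule: H(output | prompt) = H(prompt, output) - H(prompt).
h_out = entropy(o for _, o in pairs)
h_out_given_prompt = entropy(pairs) - entropy(p for p, _ in pairs)
print(h_out_given_prompt)  # 0.0 for a deterministic decoder
```

Of course, this is exactly why raw Shannon information is the wrong lens here: it says nothing about how hard the output is to *compute* from the prompt.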
If the only thing you provide as a post is that question, then it’s a very, very short post! If you have a substantial claim to make and you write it as a prompt, but it’s badly formatted or missing detail, then that’s the post. The post is effectively “hey, I think asking this prompt is a good idea. Here’s an output.” For complex prompts, that may be enough. It may even be better to prompt a human. For example, we have question posts!
For example, I could copy and paste this message thread over to Claude and provide a collapsible section; but as is, we mostly know what Claude would probably say. (Well, come to think of it, conceivably you don’t, if you only use ChatGPT and their responses differ significantly on this topic. Doubtful for this topic, but it does happen.)
The v-information content is clearly increased, though.
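For readers who haven’t met the term: V-information (Xu et al., ICLR 2020, “A Theory of Usable Information under Computational Constraints”) measures information relative to a restricted family $\mathcal{V}$ of predictors a bounded observer can actually run, roughly:

```latex
% Predictive V-entropy and V-information (Xu et al., 2020).
% \mathcal{V} is the allowed predictor family; \varnothing is a null input.
H_{\mathcal{V}}(Y \mid X) = \inf_{f \in \mathcal{V}} \; \mathbb{E}_{x,y}\!\left[-\log f[x](y)\right],
\qquad
I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y \mid \varnothing) - H_{\mathcal{V}}(Y \mid X).
```

Under this lens the point survives deterministic decoding: a computationally bounded reader can extract more usable information from the written-out list than from the bare prompt, even though the list is a function of the prompt.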