If the LLM text contains surprising stuff, and you DID thoroughly investigate for yourself, then you obviously can write something much better and more interesting.
This is false. Dressing up text to be readable is a separate skill not everyone has.
I’d rather read something ‘unreadable’ that comes from someone’s currently-fermenting models than read something ‘readable’ that does not. If you write a really detailed prompt that is basically the post itself, just with poor / unclear sentence structure, and the LLM fixes the sentence structure without changing the content, then that seems mostly fine / good. (I think a bit of subtle info might be lost unless you’re really vigilant, but the tradeoff could be worth it, idk.)