I’d rather read something ‘unreadable’ that comes from someone’s currently-fermenting models than read something ‘readable’ that does not. If you write a really detailed prompt that’s basically the post but with poor / unclear sentence structure, and the LLM fixes the sentence structure without changing the content, then this seems probably mostly fine / good. (I think a bit of subtle info might be lost unless you’re really vigilant, but the tradeoff could be worth it, idk.)