Sounds interesting; I talk to LLMs quite a bit as well, and I’m interested in any tricks you’ve picked up. I put quite a lot of effort into pushing them to be concise and grounded.
Eg, I think an LLM bot designed by me would only get banned for being an LLM, not for lacking useful things to say in its comments. Relatedly, it probably wouldn’t comment very often, despite reading a lot of posts and comments: it would mostly show up in threads where someone said something that seemed to need a specific kind of clarifying question, and I’d be doing prompt design aimed at making the AI itself evaluate its few, very short comments against a high bar of postability.
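For concreteness, here’s a minimal sketch of that gating idea, assuming the official openai Python client; the model name, prompt wording, threshold, and function name are all illustrative, not something I’ve actually built or tuned:

```python
# Hypothetical "postability gate": the bot drafts a comment, then a
# separate call scores the draft against a deliberately high bar, and
# only high-scoring drafts ever get posted. Prompt wording, model name,
# and threshold are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

GATE_PROMPT = (
    "You review draft comments for a high-signal forum. Score the draft "
    "0-10 on whether it asks for a specific clarification, adds something "
    "the thread lacks, and is as short as it can be. "
    "Reply with only the integer."
)

def postable(draft: str, threshold: int = 9) -> bool:
    """True only if the model itself rates the draft above the bar."""
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": GATE_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    text = reply.choices[0].message.content or ""
    try:
        score = int(text.strip())
    except ValueError:
        return False  # unparseable score: err on the side of silence
    return score >= threshold
```

The point is that most drafts should die at the gate; the bot’s value would come from the bar being high, not from the drafting.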
I also think a very well designed summarizer prompt would be useful to build directly into the site, mostly because otherwise it’s a bunch of work to summarize each post before reading it. I’m often frustrated that there isn’t a built-in overview of a post: ideally one line on the homepage and a few lines at the top of each post. Posts where the author writes a title that accurately describes the contents and puts an overview at the top are great, but rarer than I’d prefer. The issue is that pasting a post and asking for an overview typically gets awful results. My favorite trick for asking for overviews is “Very heavily prefer direct quotes any time possible.” Also: call it compression, not summarization, for a few reasons. I’m unsure how long those concepts will stay distinct, but where they differ, what I want is usually the former.
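As a sketch of that direction (same caveats as above: the client is real, but the prompt wording and function are illustrative):

```python
# Hypothetical compression prompt: frame the task as compression, not
# summarization, and demand direct quotes wherever possible.
from openai import OpenAI

client = OpenAI()

COMPRESS_PROMPT = (
    "Compress the following post; do not summarize it in your own "
    "words. Very heavily prefer direct quotes any time possible, "
    "trimmed with ellipses where needed. Output one line suitable "
    "for a homepage listing, then 3-5 lines for the top of the post."
)

def compress(post: str) -> str:
    """Return a one-line plus few-line compressed overview of a post."""
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": COMPRESS_PROMPT},
            {"role": "user", "content": post},
        ],
    )
    return reply.choices[0].message.content or ""
```

In practice I’d iterate on the wording; the load-bearing parts are “do not summarize in your own words” and the preference for direct quotes.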
However, given the culture on the site, I currently feel like I’m going to get disapproval for even suggesting this. Eg,
“if I wanted an LLM output, I would ask it myself”
There are circumstances where I don’t think this is accurate, in ways beyond just “that’s a lot of asking, though!”: I would typically ask an LLM to help me enumerate a bunch of ways to put something, and then pick the ones that seem promising. I would only paste highly densified LLM writing. I’d appreciate it becoming culturally unambiguous that the problem is the shitty, foolish, low-density, high-fluff writing LLMs produce by default, rather than simply “the words came from an LLM”.
I often read things, here and elsewhere, where my reaction is “you don’t dislike the way LLMs currently write enough, and I have no idea whether this line came from an LLM, but if it didn’t, that’s actually much worse”.