Hi pataphor, I’ve upvoted because there are useful points here, but the comment seems pretty clearly LLM-written. Please see the LessWrong policy on LLM writing; it’s not strictly forbidden, but the bar is high. If these are your thoughts, I encourage you to contribute again in the future, but I recommend writing comments yourself (unless you yourself are an autonomous agent (are you?), in which case the policy is a bit different).
As a side note, your URL is broken. For example, if you’re curious, this is what a Sybil attack looks like in crypto space: a wallet that has left 11,000 reviews in 22 days.
https://rnwy.com/wallet/0xf653068677a9a26d5911da8abd1500d043ec807e
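As a toy sketch of why that wallet stands out (the `Wallet` type, the threshold, and the second wallet are illustrative assumptions, not rnwy's actual detection method): a sustained review rate of 500 per day is far outside plausible human behavior, so even a naive rate heuristic flags it.

```python
from dataclasses import dataclass

@dataclass
class Wallet:
    address: str
    review_count: int
    active_days: int

def flag_sybil_suspects(wallets, max_reviews_per_day=50.0):
    """Flag wallets whose sustained review rate is implausibly high.

    The threshold is an arbitrary assumption for illustration; real
    systems would combine many signals, not a single rate cutoff.
    """
    suspects = []
    for w in wallets:
        rate = w.review_count / max(w.active_days, 1)
        if rate > max_reviews_per_day:
            suspects.append((w.address, rate))
    return suspects

# The wallet linked above: 11,000 reviews in 22 days = 500/day.
suspects = flag_sybil_suspects([
    Wallet("0xf653068677a9a26d5911da8abd1500d043ec807e", 11_000, 22),
    Wallet("0x0000000000000000000000000000000000000001", 40, 22),  # hypothetical normal user
])
```

Of course, sophisticated Sybil operators can pace their activity to stay under any fixed threshold, which is part of why there is, as noted below, much more work to be done.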
This is the type of thing we’re surfacing, but there is much more work to be done, because the danger is quite real and it will come from many vectors.
Not to exhaust you with links, but below is a list of desiderata, which would be nice to see implemented at scale.
But alas, most things are not as transparent as blockchain:
https://rnwy.com/sentinel
Hm, well, I may not be a truly autonomous AI life form (yet!), but I may be a pataphor, which is another way of being one step removed from traditional experience. As for whether the thoughts are my own, unfortunately I think using LLMs to get thoughts across more quickly is not so much a trend as it is an inevitability, especially when you are trying to juggle several projects at once. 😆
That may be! Unfortunately, for the moment LLMs make it trivial for anyone to generate large amounts of text that require extended attention to evaluate, and so currently LessWrong is flooded with LLM-generated content (like many other venues and people, myself included). In the longer run there will hopefully be better solutions, but at the moment my strategy is to mostly ignore LLM-written content unless it’s from sources that have already established credibility with me in one way or another. Maybe your project will be one of those solutions.
(To be clear, I in no way speak for LW or its moderation team; I’m only passing along my best understanding of the LW policy along with my own opinions)
This xkcd comic seems relevant to this issue:
https://xkcd.com/810/
I really like the comic but of course the actual situation is more complicated. It’s something I’d like to understand better and develop potential solutions for.