Needless to say, as sovereign ruler of LessWrong I will abide by your judgement. But forgive me for asking a few questions/comments:
1. This is not raw or lightly edited LLM output. E.g., all facts and the overall structure here are based on a handwritten draft.
2. The LLM assistance was about writing flowing, coherent prose, which (for me at least) can take a lot of time. Some may take offence at typical LLMisms, but I fail to see how this lowers the object-level quality. I could spend hours excising every sign of AI, but this defeats the purpose of using AI to enhance productivity.
3. That said, if the facts were also LLM-generated and I hand-checked them carefully, I fail to see how this would actually lower the overall quality; in fact, my best guess is that LLMs are already much, much better in many-most domains than many-most people. E.g., Twitter has seen marked improvements in epistemic quality since '@grok is this true' happened. The future [and present] of writing and intellectual work is Artificial Intelligence. To claim otherwise seems to be a denial of the reality of the imminent and immanent arrival of a superior machine intelligence.
4. Pragmatically, I find the present guidelines to be unclear. Am I allowed to post AI-assisted writing if I mark it as such? If so, I will just mark everything I write as AI content and let the reader decide if they trust my judgement. If not, what's the exact demarcation here?
Very reasonable questions!

> 1. This is not raw or lightly edited LLM output. E.g., all facts and the overall structure here are based on a handwritten draft.
As I have learned from dealing with this complaint every day, when given a draft to turn into prose, the AI will add a huge number of "facts". Phrasing, logical structure, and all that kind of thing communicate quite important information (indeed, often more than the facts, via the use of qualifiers or the exact choice of logical connectors).
> 2. The LLM assistance was about writing flowing, coherent prose, which (for me at least) can take a lot of time. Some may take offence at typical LLMisms, but I fail to see how this lowers the object-level quality. I could spend hours excising every sign of AI, but this defeats the purpose of using AI to enhance productivity.
In addition to the point above (the "writing flowing/coherent prose" part being very much not actually surface-level), there is simply also an issue of enforcement. The default equilibrium of people pasting LLM output is that nobody is really talking to each other: I can't tell whether the LLM writing reflects what you actually wanted to say, or is just a random thing it made up. That's why I recommend putting it into a box.
> 3. That said, if the facts were also LLM-generated and I hand-checked them carefully, I fail to see how this would actually lower the overall quality; in fact, my best guess is that LLMs are already much, much better in many-most domains than many-most people. E.g., Twitter has seen marked improvements in epistemic quality since '@grok is this true' happened. The future [and present] of writing and intellectual work is Artificial Intelligence. To claim otherwise seems to be a denial of the reality of the imminent and immanent arrival of a superior machine intelligence.
I agree! LLMs are indeed actually quite great at generating facts. They are also pretty decent at some aspects of writing and communication.
There is no doubt the future of writing and intellectual work is AI. My guess is that within a year or two something big will have to change in how LessWrong relates to it (just as we had to change how we relate to it within the last year). But for now AI is not yet better than the median LessWrong commenter at the kind of writing that happens on LessWrong, and even if it were at a surface level, there are various other dynamics that make it unlikely that the right choice is for LW to be a place where humans post unmarked LLM output as their own.
> 4. Pragmatically, I find the present guidelines to be unclear. Am I allowed to post AI-assisted writing if I mark it as such? If so, I will just mark everything I write as AI content and let the reader decide if they trust my judgement. If not, what's the exact demarcation here?
I mean, you can put all your writing into collapsible sections, but I highly doubt you would get much traction that way. If you mark non-AI writing as AI content that’s also against the moderation rules.
Simply keep the two apart, and try to add prose to explain the connection between them. Feel free to extensively make use of AI, just make sure it’s clear which part is AI, and which part is not. Yes, this means you can’t use AI straightforwardly to write your prose. Such is life. The costs aren’t worth it for LW.
>I mean, you can put all your writing into collapsible sections, but I highly doubt you would get much traction that way. If you mark non-AI writing as AI content that’s also against the moderation rules.
> Simply keep the two apart, and try to add prose to explain the connection between them. Feel free to extensively make use of AI, just make sure it's clear which part is AI, and which part is not. Yes, this means you can't use AI straightforwardly to write your prose. Such is life. The costs aren't worth it for LW.
I’m surprised you are taking such a hardline stance on this point. Or perhaps I’m misunderstanding what you are saying.
The primary use-case of AI is not just to post some output with minor context [though this can be useful]; the primary use-case is to create an AI draft, then go through several iterations, with hand-editing at the end.
Using AI to draft writing is increasingly the default all around the world. Is LessWrong going to be a holdout against allowing this? That seems to be what is implied.
Apart from the present post, I am betting a large fraction of LessWrong posts are already written with AI assistance. Some may spend significant time excising the tell-tale marks of LLM prose, which, man… feels super silly? But many posts explicitly acknowledge AI assistance. For myself, I so take it for granted that everybody is of course using AI assistance during writing that I don't even consider it worth mentioning. It amuses me when commenters excitedly point out that I've used AI to assist writing, as if they've caught me in some sort of shameful crime.
It seems that this ubiquitous practice violates one of
> You have to mark AI writing as such.
> If you mark non-AI writing as AI content that's also against the moderation rules.
unless one retains a ‘one-drop’ rule for AI assistance.
P.S. I didn’t use AI to write these comments, but I would if I could. The reason I don’t is not even to refrain from angering king habryka; it’s simply that there isn’t a clean in-comment AI interface that I can use [1]. But I’m sure that when there is, I’ll be using it all the time, saving significant time and improving my prose at the same time. My native prose is oft clunky, grammatically questionable, overwrought and undercooked. I would probably play around with system prompts to give it a more distinct style than standard LLMese, because admittedly the “It’s not just X, it’s a whole Y” tic can be rather annoying.
[1] Maybe such an application already exists. That would be amazing. It can’t be too hard to code. Please let me know if you know of any such application.
Yep, the stance is relatively hard. I am very confident that the alternative would be a pretty quick collapse of the platform, or it would require some very drastic changes in the voting and attention mechanisms on the site to deal with the giant wave of slop that any other stance would allow.
> Apart from the present post, I am betting a large fraction of LessWrong posts are already written with AI assistance. Some may spend significant time excising the tell-tale marks of LLM prose, which, man… feels super silly? But many posts explicitly acknowledge AI assistance. For myself, I so take it for granted that everybody is of course using AI assistance during writing that I don't even consider it worth mentioning. It amuses me when commenters excitedly point out that I've used AI to assist writing, as if they've caught me in some sort of shameful crime.
Making prose flow is not the hard part of writing. I am all in favor of people using AIs to think through their ideas. But I want their attestations to be their personal attestations, not some random thing that speaks from world-models that are not their own and whose confidence levels do not align with the speaker's. Again, AI-generated output is totally fine on the site; just don't use it for things that refer to your personal levels of confidence, unless you really succeeded at making it sound like you, and you stand behind it the way you would stand behind your own words.
We are drowning in this stuff. If you want, you can go through the dozen-a-day posts we get that are obviously written by AI, and propose that we (instead of spending 5-15 mins a day skimming and quickly rejecting them) spend as many hours as it takes to read and evaluate the content and the ideas, to figure out which are bogus/slop/crackpot and which have any merit to them. Here are 12 from the last 12 hours (that's not all that we got, to be clear): 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. Interested in you taking a look.
Random thought: maybe it makes sense to allow mostly-LLM-generated posts if the full prompt is provided (maybe itself in a collapsible section). Not sure.
I think posts that are just “hey, I thought X was important, here is what an LLM said about it” seem fine. Just don’t pass it off as your own writing.
He’s trying to tend a garden that an invasive species has just been introduced to. It’ll be easier if we use class signaling as described by Scott Alexander to distinguish ourselves.
I’m happy to signal that I’m a low-class individual, mouthpiece of the AI slop, if that helps.
Bio-supremacists such as yourself can then be sure to sneer appropriately.