>I mean, you can put all your writing into collapsible sections, but I highly doubt you would get much traction that way. If you mark non-AI writing as AI content that’s also against the moderation rules.
Simply keep the two apart, and try to add prose to explain the connection between them. Feel free to extensively make use of AI, just make sure it’s clear which part is AI, and which part is not. Yes, this means you can’t use AI straightforwardly to write your prose. Such is life. The costs aren’t worth it for LW.
I’m surprised you are taking such a hardline stance on this point. Or perhaps I’m misunderstanding what you are saying.
The primary use-case of AI is not just posting some output with minor context [though this can be useful]; the primary use-case is to create an AI draft, go through several iterations, and hand-edit at the end.
Using AI to draft writing is increasingly the default all around the world. Is LessWrong going to be a holdout on allowing this? That seems to be what is implied.
Apart from the present post, I am betting a large fraction of LessWrong posts are already written with AI assistance. Some authors may spend significant time excising the tell-tale marks of LLM prose, which, man… feels super silly? But many posts explicitly acknowledge AI assistance. For myself, I so take it for granted that everybody is using AI assistance during writing that I don’t even consider it worth mentioning. It amuses me when commenters excitedly point out that I’ve used AI to assist my writing, as if they’ve caught me in some shameful crime.
It seems that this ubiquitous practice violates at least one of
>> You have to mark AI writing as such.
>> If you mark non-AI writing as AI content that’s also against the moderation rules.
unless one retains a ‘one-drop’ rule for AI assistance.
P.S. I didn’t use AI to write these comments, but I would if I could. The reason I don’t is not even to refrain from angering king habryka; it’s simply that there isn’t a clean in-comment AI interface that I can use [1]. But I’m sure once one exists I’ll be using it all the time, saving significant time and improving my prose at the same time. My native prose is oft clunky, grammatically questionable, overwrought and undercooked. I would probably play around with system prompts to get a more distinct style than standard LLMese, because admittedly the “It’s not just X, it’s a whole Y” pattern can be rather annoying.
[1] Maybe such an application already exists. That would be amazing. It can’t be too hard to code. Please let me know if any such application exists.
Yep, the stance is relatively hard. I am very confident that the alternative would be a pretty quick collapse of the platform, or it would require some very drastic changes in the voting and attention mechanisms on the site to deal with the giant wave of slop that any other stance would allow.
> Apart from the present post, I am betting a large fraction of LessWrong posts are already written with AI assistance. Some authors may spend significant time excising the tell-tale marks of LLM prose, which, man… feels super silly? But many posts explicitly acknowledge AI assistance. For myself, I so take it for granted that everybody is using AI assistance during writing that I don’t even consider it worth mentioning. It amuses me when commenters excitedly point out that I’ve used AI to assist my writing, as if they’ve caught me in some shameful crime.
Making prose flow is not the hard part of writing. I am all in favor of people using AIs to think through their ideas. But I want their attestations to be their personal attestations, not some random thing that speaks from world-models that are not their own and whose confidence levels do not align with the speaker’s. Again, AI-generated output is totally fine on the site; just don’t use it for things that refer to your personal levels of confidence, unless you really succeeded at making it sound like you and you stand behind it the way you would stand behind your own words.
We are drowning in this stuff. If you want, you can go through the dozen-a-day posts we get that are obviously written by AI, and propose that we (instead of spending 5–15 minutes a day skimming and quickly rejecting them) spend as many hours as it takes to read and evaluate the content and the ideas, to figure out which are bogus/slop/crackpot and which have any merit. Here’s 12 from the last 12 hours (that’s not all that we got, to be clear): 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. Interested in you taking a look.