I want an option to filter for writing with zero LLM influence.
I do not trust LLMs and I am not sure how I feel about LLM / human collaboration. As systems become more powerful, I am worried that they may become a serious epistemic hazard, up to and including actually hacking my brain. I would like to be able to protect myself from this aggressively.
For that reason, I think that the current LW policy on LLM usage is insufficient. Every post that uses an LLM in any part of its production process whatsoever should be flagged as such. Personally, I am currently willing to accept some LLM usage upstream of the writing I read and I would not routinely filter such posts out of my feed, but I would like the option to do so (which I would occasionally use as a sanity check) very aggressively and with no exceptions. Basically, an off-switch.
I would also like to be able to filter out any writing of which even a single word is LLM generated (except perhaps parenthetically). I think I would use this option routinely, though perhaps I would also like to exempt specific users (e.g. authors I have followed). But this softer option should allow consultation with LLMs, experiments with LLMs, etc.
I consider it epistemic violence that I occasionally discover, after the fact, that an LLM was used extensively in the writing process of a post.
I think extensive use of LLM should be flagged at the beginning of a post, but “uses an LLM in any part of its production process whatsoever” would probably result in the majority of posts being flagged and make the flag useless for filtering. For example I routinely use LLMs to check my posts for errors (that the LLM can detect), and I imagine most other people do so as well (or should, if they don’t already).
Unfortunately this kind of self-flagging/reporting is ultimately not going to work, as far as individually or societally protecting against AI-powered manipulation, and I doubt there will be a technical solution (e.g. AI content detector or other kind of defense) either (short of solving metaphilosophy). I’m not sure it will do more good than harm even in the short run, because it can give a false sense of security and punish the honest / reward the dishonest, but I still lean towards trying to establish “extensive use of LLM should be flagged at the beginning of a post” as a norm.
“uses an LLM in any part of its production process whatsoever” would probably result in the majority of posts being flagged and make the flag useless for filtering. For example I routinely use LLMs to check my posts for errors (that the LLM can detect), and I imagine most other people do so as well (or should, if they don’t already).
My own data point: for the vast majority of my posts, there is zero LLM involved at any stage.
I recently, rather embarrassingly, made a post with a massive error that an LLM would have found immediately. I seriously misread a paper, in a way where simply pasting the paper and the post into Claude and asking “any egregious misreadings?” would have stopped me from making that post. This is far too useful for me to turn down, and this kind of due diligence is +EV for everyone.
Yes mostly agree. Unless the providers themselves log all responses and expose some API to check for LLM generation, we’re probably out of luck here, and incentives are strong to defect.
One thing I was thinking about (similar to, e.g., speedrunners) is just making a self-recording or screen recording of actually writing out the content / post? This probably can be verified by an AI or neutral third party. Something like a “proof of work” for writing your own content.
Grammarly has https://www.grammarly.com/authorship if you want to prove that you wrote something.
If it became common to demand and check proofs of (human) work, there would be a strong incentive to use AI to generate such proofs, which does not seem very hard to do.
I don’t expect the people on LW that I read to intentionally lie about stuff.
Maybe we want a multi-level categorization scheme instead? Something like:
Level 0: Author completely abstains from LLM use in all contexts (not just this post)
Level 1: Author uses LLMs but this particular post was made with no use of LLM whatsoever
Level 2: LLM was used (e.g. to look up information), but no text/images in the post came out of LLM
Level 3: LLM was used for light editing and/or image generation
Level 4: LLM was used for writing substantial parts
Level 5: Mostly LLM-generated with high-level human guidance/control/oversight
any reason not to just start doing that as post tags? no convenient way to do it for comments though.
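To make the filtering side of this concrete, here is a minimal sketch of how such levels could drive the “off-switch” and the softer per-author exemptions discussed above, assuming posts carried a disclosure level as metadata. All names here (LlmDisclosure, maxAcceptableLevel, etc.) are hypothetical; nothing like this exists on LW today.

```typescript
// Hypothetical disclosure levels, mirroring the scheme above.
enum LlmDisclosure {
  NoLlmEver = 0,        // author abstains from LLMs entirely
  NoneThisPost = 1,     // no LLM use for this particular post
  LookupOnly = 2,       // LLM consulted, but no LLM text/images in the post
  LightEditing = 3,     // light editing and/or image generation
  SubstantialParts = 4, // LLM wrote substantial parts
  MostlyLlm = 5,        // mostly LLM-generated with human oversight
}

interface Post {
  author: string;
  disclosure: LlmDisclosure;
}

interface FilterPrefs {
  maxAcceptableLevel: LlmDisclosure; // the "off-switch" sets this to NoneThisPost or NoLlmEver
  exemptAuthors: Set<string>;        // followed authors the reader chooses to trust anyway
}

// Keep a post if its disclosed level is within the reader's tolerance,
// or if the author has been explicitly exempted.
function passesFilter(post: Post, prefs: FilterPrefs): boolean {
  if (prefs.exemptAuthors.has(post.author)) return true;
  return post.disclosure <= prefs.maxAcceptableLevel;
}
```

Of course the filter is the easy part; getting honest disclosure values is the hard part, as several comments below point out.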
This is an edge case, but just flagging that it’s a bit unclear to me how to apply this to my own post in a useful way. As I’ve disclosed in the post itself:
OpenAI’s o3 found the idea for the dovetailing procedure. The proof of the efficient algorithmic Kraft coding in the appendix is mine. The entire post is written by myself, except the last paragraph of the following section, which was first drafted by GPT-5.
Does this count as Level 3 or 4? o3 provided a substantial idea, but the resulting proof was entirely written down by myself. I’m also unsure whether the full drafting of precisely one paragraph (which summarizes the rest of the post) by GPT-5 counts as editing or the writing of substantial parts.
We need another “level” here, probably parallel to the others, for when LLMs are used for idea generation, criticism of outlines, as a discussion partner, et cetera. For instance, let’s say I’m thinking about countries that are below their potential in some tragic way, like Russia and Iran: countries with loads of cultural capital and an educated population, that historically have had a lot going for them. Then I can ask an LLM “any other countries like that?” and it might mention, say, North Korea, Iraq and Syria, maybe Greece or Turkey or southern Italy, with some plausible story attached to each. When I do this interaction with an LLM, the end product is going to be colored by it. If I initially intended to talk about how Russia and Iran have been destroyed by some particular forms of authoritarianism, my presentation, hypothesis, or whatever, will likely be modified so I can put Greece and Iraq into the same bucket. This alters my initial thoughts and probably pushes my thought-generation process into a mold more or less shaped by the LLM, “hacking my brain”. When this happens across many posts, it’s likely to homogenize writing not in style but in semantic content.
This example is kinda weak, but I think this is the kind of thing OP is worried about. But I’d be curious to hear stronger examples if anyone can think of them.
I use LLMs for basically anything substantial that I write. Like, a lot of my knowledge of random facts about the world is downstream of having asked LLMs about it. It would be IMO pretty dumb to write a post that is e.g. trying to learn from past social movement failures and not have an LLM look over it to see whether it’s saying anything historically inaccurate.
So I do think there needs to be some bar here that is not “LLMs were involved in any way”. I do share a bunch of concerns in the space.
Like, a lot of my knowledge of random facts about the world is downstream of having asked LLMs about it.
Uhhh… that seems maybe really bad. Do you sometimes do the kind of check which, if it were applied to The New York Times pre-AI, would be sufficient to make Gell-Mann Amnesia obvious?
Personally, the most I’ve relied on LLMs for a research project was the project behind this shortform in February 2025, and in hindsight (after reading up on some parts more without an LLM) I think I ended up with a very misleading big picture as a result. I no longer use LLMs for open-ended learning like that; it was worth trying but not a good idea in hindsight.
It would be IMO pretty dumb to write a post that is e.g. trying to learn from past social movement failures and not have an LLM look over it to see whether it’s saying anything historically inaccurate.
Do you then look over what the LLM has said and see whether it’s saying anything historically inaccurate, without using an LLM?
Not most of the time! Like, I sometimes ask multiple LLMs, but I don’t verify every fact that an LLM tells me, unless it’s a domain where I predict LLMs are particularly likely to hallucinate. I keep in mind that stuff is sometimes hallucinated, but most of the time it’s fine to know that something is quite probably true.
There’s no such thing as “a domain where LLMs are particularly likely to hallucinate”. In every domain there’s some obscure jagged boundary, not very far from normal standard questions to ask, where LLMs will hallucinate, usually plausibly to a non-expert.
To me, this sounds like you’re simply pushing the problem a little bit downstream without actually addressing it. You’re still not verifying the facts; you’re just getting another system with similar flaws to the first (you). You aren’t actually fact checking at any point.
That is not how Bayesian evidence works. I am treating LLM output as somewhat less trustworthy than I would trust what a colleague of mine says, but not fundamentally different. I am skeptical that you spend your days double-checking every conversation you have with another human. I also don’t think you should spend your days double-checking every single thing an LLM tells you.
This feels kind of like the early conversations about Wikipedia where people kept trying to insist Wikipedia is “not a real source”.
I am treating LLM output as somewhat less trustworthy than I would trust what a colleague of mine says, but not fundamentally different.
If you’re asking a human about some even mildly specialized topic, like the history of Spain in the 17th century, or different crop rotation methods, or ordinary differential equations, and there’s no special reason that they really want to appear like they know what they’re talking about, they’ll generally just say “IDK”. LLMs are much less like that IME. I think this is actually a big difference in practice, at least in the domains I’ve tried (reproductive biology). LLMs routinely give misleading / false / out-of-date / vague-but-deceptively-satiating summaries.
I agree the LLMs are somewhat worse, especially compared to rationalist-adjacent experts in specialized fields, but they really aren’t that bad for most things. Like I researched the state of the art of datacenter security practices yesterday, and I am not like 99% confident that the AI got everything right, but I am pretty sure it helped me understand the rough shape of things a lot better.
This seems fine and good—for laying some foundations, which you can use for your own further theorizing, which will make you ready to learn from more reliable + rich expert sources over time. Then you can report that stuff. If instead you’re directly reporting your immediately-post-LLM models, I currently don’t think I want to read that stuff, or would want a warning. (I’m not necessarily pushing for some big policy, that seems hard. I would push for personal standards though.)
Fwiw, in my experience LLMs lie far more than early Wikipedia or any human I know, and in subtler and harder to detect ways. My spot checks for accuracy have been so dismal/alarming that at this point I basically only use them as search engines to find things humans have said.
I am wondering whether your experiences were formed via the first generation of reasoning models, and my guess is you also have different kinds of questions in mind.
The thing that LLMs are really great at is speaking and thinking in the ontology and structure that is prevalent among experts in any field. This is usually where the vast majority of evidence comes from. LLMs aren’t going to make up whole ontologies about how bankruptcy law works, or how datacenter security works. They might totally make up details, but they won’t make up the high-level picture.
Second, this has just gotten a lot better over the last 6 months. GPT-5 still lies a good amount, but vastly less than o1 or o3. I found o1 almost unusable on this dimension.
Datapoint: I’m currently setting up a recording studio at Lighthaven, and I am using LLMs all the time to get guides for things like “how to change a setting on this camera” or “how to use this microphone” or “how to use this recording software”.
Yes, they confabulate menus and things a lot, but as long as I keep uploading photos of what I actually see, they know the basics much better than me (e.g. what bit rate to set the video vs the audio, where to look to kill the random white noise input I’m getting, etc).
I’d say they confabulate like 50% of the time, but they’re still a much more effective search engine for me than Google, and they can read the manual much faster than me. My guess is I simply couldn’t do some of the projects I’m doing without them.
It’s perfectly fine to have strong personal preferences for what content you consume, and how it’s filtered, and to express these preferences. I don’t think it’s cool to make hyperbolic accusations of violence. It erodes the distinctions we make between different levels of hostility that help prevent conflicts from escalating. I don’t think undisclosed LLM assistance can even be fairly characterized as deceptive, much less violent.
I don’t think it’s hyperbolic at all; I think this is in fact a central instance of the category I’m gesturing at as “epistemic violence”, which also includes, for instance, p-hacking, lying, manipulation, misleading data, etc. If you don’t think that category is meaningful, or you dislike my name for it, can you be more specific about why? Or about why this is not an instance? Another commenter, @Guive, objected to my usage of the word violence here because “words can’t be violence”, which I think is a small skirmish in a wider culture war that I am really not trying to get into.
To be explicit (again): I do not in any way want to imply that a person using an LLM without disclosing it somehow justifies physical violence against them. I also don’t think it’s intentionally an aggression. But depending on the case, it CAN BE seriously negligent towards the truth and towards community truth-seeking norms, and in that careless negligence it can damage the epistemics of others, when a simple disclaimer / “epistemic status” / source would have been VERY low effort to add. I have to admit I hesitate a bit to say this so explicitly, because many people I respect use LLMs extensively, I am not categorically against this, and I feel slightly bad about potentially burdening or just insulting them; generally speaking, I feel some degree of social pressure against saying this. And as a result I hesitate to back down from my framing without a better reason than that it feels uncomfortable and some people don’t like it.
Thanks for going into more detail. I don’t think “epistemic violence” is a good term for this category:
Violence generally describes intentional harm, whereas p-hacking and misleading data are not always intentional.
Violence generally describes harm that meets a certain threshold—flicking someone is technically violent, but it would be hyperbolic to describe it as such without more context.
I think a better term for this broad category might be “epistemic pollution”, as it describes filling the information environment with negative value stuff. I would be comfortable describing e.g. a confidence scheme or an impersonation scam as epistemic violence, although there would have to be some point to doing so.
In general, I’m skeptical of coining a novel term with strong connotations to try to argue a point—it’s basically the noncentral fallacy.
The way I think about this is a bit more like “somehow, we need immune systems against arbitrary nuanced persuasion.” Which is for sure a very hard problem, but, I don’t think simple tricks of “check if LLM influenced” will turn out to be that useful.
It seems like a good start—for instance, it would be potentially useful data.
I think at the very least you want more metadata about how the AI was used.
Something like “somehow automatically track metadata about how documents came to be and include it”, the way you might try to do with photography. (I guess the metaphor here is more like “have text documents automatically include info about what text was entered via ‘paste’ instead of by typing manually”?)
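To illustrate that parenthetical, here is a minimal sketch of what “track what was pasted vs. typed” could look like in a browser-based post editor. This is purely illustrative: the names (trackProvenance, typedChars, etc.) are made up, no site does exactly this as far as I know, and it only distinguishes pasted from typed text, which says nothing about where the pasted text came from.

```typescript
// Sketch: attach provenance tracking to a plain <textarea>-based editor.
// Each input event is classified as typed or pasted and tallied, so the
// final document could carry a rough "X% pasted" metadata field on submit.

interface ProvenanceLog {
  typedChars: number;
  pastedChars: number;
  pastes: { at: number; length: number }[]; // offsets and sizes of pasted spans
}

function trackProvenance(editor: HTMLTextAreaElement): ProvenanceLog {
  const log: ProvenanceLog = { typedChars: 0, pastedChars: 0, pastes: [] };

  editor.addEventListener("paste", (e: ClipboardEvent) => {
    const text = e.clipboardData?.getData("text/plain") ?? "";
    log.pastedChars += text.length;
    log.pastes.push({ at: editor.selectionStart, length: text.length });
  });

  editor.addEventListener("input", (e: Event) => {
    const ev = e as InputEvent;
    // Pasted text is already counted above; count ordinary keystrokes here.
    if (ev.inputType === "insertText" && ev.data) {
      log.typedChars += ev.data.length;
    }
  });

  return log;
}

// Example metadata the editor could attach on submit:
// { typedChars: 5200, pastedChars: 800, pastes: [...] }
```

A real version would also have to handle drag-and-drop and programmatic insertion, and of course LLM output can simply be retyped, which ties back to the point above about faked proofs of work.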
It tends to be bad (or at least costly) to have a rule that has the property that violations of the rule cannot reliably be detected, which leads to the question of how you propose to detect LLM-written content.
I agree. I really don’t like it. Like Buck posted earlier:
You can see the chat here. I prompted Claude with a detailed outline, a previous draft that followed a very different structure, and a copy of “The case for ensuring powerful AIs are controlled” for reference about my writing style. The outline I gave Claude is in the Outline tab, and the old draft I provided is in the Old draft tab, of this doc.
As you can see, I did a bunch of back and forth with Claude to edit it. Then I copied to a Google doc and edited substantially on my own to get to the final product.
And I don’t think he did anything against LessWrong rules, or anything immoral really, but I still really don’t like it.
If it were up to me, we’d have a rule that every single word in your post should either be physically typed by you or be in quotation marks.
So it’s fine if you copy your article into some AI and ask it to find grammar mistakes, as long as you go and fix them yourself.
It’s also fine to have a fair bit of LLM involvement in the post, even conceptual stuff and writing, as long as the finished product is typed up by you.
That way I know every single word has at least passed through the brain of the author.
I don’t really believe there is any such thing as “epistemic violence.” In general, words are not violence.
Semantics; it’s obviously not equivalent to physical violence.
From the moment I understood the weakness of my flesh, it disgusted me