I am not a “native writer” (my background is German and Hungarian), so naturally I get assistance with grammar but not with formal content. You can already infer the latter from my (hand-drawn) TSP illustration on the iPad, so this question is a bit annoying.
Fair enough. I found it unreadable in a way I associate with AI (lots of dense words, but tricky to extract the content from them), and the em dashes are something of a giveaway.
Given how much slop there is, I do appreciate it when people clarify what they used AI for, because I don’t want to wade through a ton of slop that wasn’t even human-written.
Thanks for replying.
Sure, thank you for thanking.
I will add a few observations/ideas on the topic.
1. It is often claimed that large language models overuse em dashes, but the matter is more nuanced (I read an article specifically on this topic). Effective prose can employ em dashes for expressivity, and the choice is ultimately stylistic. I, for example, make frequent use of both em and en dashes and have come to prefer them in certain contexts.
2. There is a core epistemic concern: when we detect LLM-like features, we infer “this is produced by an LLM,” yet it does not follow inductively that every text exhibiting similar features must originate from one. Moreover, there is a form of survivorship bias in the texts we fail to identify as machine-generated. Additional complications arise when attempting to delineate where “slop” begins. Does it include grammar correction, stylistic adjustment, paraphrasing, revision, or prompt-driven rewriting?
3. The emphasis should rest primarily on content rather than on presentation. The central question is logical: could an LLM generate this material at scale? For instance, external references—such as video links, illustrations, or other artifacts—may alter that assessment.
For the content above, it stands on its own as a formal argument: entropy (both energetic and algorithmic) behaves, in computational reasoning, exactly like a classical logical fixed point—capable of certifying a stabilized structure but never capable of generating or justifying that structure. The latter is always outside the system. It’s somewhat related to Löb’s Theorem, which many people take as a strengthening, some as a logical fixed point. Whatever it is, the classical theory of computation (1930–1970) is overlooked in 2025.
I want to see the post without AI-assisted editing. In the conversation with an AI from which the post’s output came, I’d rather have seen the prompts, not the output. Give me the low-effort input, give me the typos and grammar errors. Most posts have flaws; it makes it easier for people to find many of them if you don’t have an AI put a layer of paint over your writing. For a good-effort post, it’s important, in my opinion, to be able to find what flaws remain. I am not at all saying this is a bad post, just that I don’t like having to read it secondhand; I definitely appreciate AI’s thoughts on it, but AI can only speak for itself, not for you.
This sounds like advice for someone who is interested in writing posts, not in developing formal arguments… In logic we are searching for transitivity and rigor, not some aesthetic preference; perhaps you overlooked that the content above is “somewhat” independent of its prosaic form.
“Most posts have flaws”
Where to find flaws in the argument presented… There could be several places where my reasoning can be challenged; the key is to do so without endorsing Yudkowsky’s views.
The game is a bit unfair, as it is inherently easier to “destroy” positivistic arguments with first-order inference: entropy, smoothness, etc. are basically where all the limitative theorems reside; arguments relying on these often collapse under scrutiny, and most of the time the failure is baked into the system.
The whole “Less Wrong” shtick forgets that sometimes there is only “wrong”. As if it were so simple to convert smartness into casino winnings… Different people would be rich.
Note: I will slowly start to delete posts like this due to irrelevance and redundancy.
(not enhanced)
“Editing with” AI does not consistently preserve semantics, or I wouldn’t ask. Incidentally, LessWrong has a human editor available; just ask in the Intercom bubble.
There are many places where I’m confused and suspect semantics were not preserved, and it would be much more efficient for you to show me the prompts than for me to nitpick all the little points. The post overall seems to make an interesting point. I am not telling you it’s a bad post. May I see the prompts?
On semantics:
Actually, semantics “can” be conserved; an example:
- Change in semantics: “This is my second car” vs “this is my second skateboard”.
- Change in syntactics: “This is my second car” vs “this is my third car”.
- No formal change in the strict sense: “This is my second car” vs “This is—actually—my second car!”
Change during prompting: “Can be both, either, neither.”
Considering the prompting itself:
I copy/pasted a rough draft (2–4 paragraphs?) and wrote something like: “Improve.” And then I edited again.
On the opaqueness:
The problem could be “limitative” thinking. Most weak logics are eliminative (e.g., the implication A → B is an elimination of A, etc.). We can only look for contradictions and check for soundness.
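For reference, the standard implication-elimination (modus ponens) rule of natural deduction, which is the textbook form of the eliminative step mentioned above (standard material, not specific to this post):

```latex
\[
\frac{A \to B \qquad A}{B}\;({\to}\mathrm{E})
\]
```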
On the post:
We can reduce it to a premise (a symbolic summary follows these bullets):
- “Every physically lawful protein has a well-defined ground-state structure determined by a finite reduction.” ≈ “Given unbounded time/energy, we can in principle compute the fold of any amino-acid sequence.”
and show that from:
- “Biological evolution, using only local variation and selection, has discovered many stable, functional proteins.”
...it does NOT follow that:
- “Therefore the space of evolution-accessible proteins has simple, exploitable structure (shallow energy landscapes), so we should expect to be able to build a practical ML system that reliably predicts their structures (or designs new ones) given enough data and compute.”
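Schematically, in my own shorthand (compressing the bullets above):

```latex
\[
\begin{aligned}
P &:\ \text{every lawful sequence has a computable ground state, in principle}\\
E &:\ \text{evolution has found many stable, functional proteins}\\
S &:\ \text{the evolution-accessible landscape is simple and ML-exploitable}\\
  &\qquad P \wedge E \nvdash S
\end{aligned}
\]
```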
More context:
- Under a finite reduction (a model of folding with a well-ordering property on a given lattice), protein folding is expressive enough to simulate arbitrary computation; hence the general folding problem is Turing-complete.
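As a concrete stand-in for such a finite reduction, here is a minimal sketch in the toy 2D HP lattice model (Lau–Dill); the model choice, sequence, and energy function are my illustrative assumptions, not the post’s exact formalism. It exhaustively enumerates every self-avoiding conformation of a short sequence and returns a ground state, i.e., the “unbounded time/energy, in principle” computation:

```python
# A minimal sketch of a "finite reduction" in the toy 2D HP lattice model
# (my illustrative stand-in, not the post's exact formalism): enumerate
# every self-avoiding conformation of a short H/P sequence and return a
# ground state by exhaustive search.

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def conformations(n):
    """Yield all self-avoiding walks of n points on Z^2, first point fixed."""
    def extend(path, occupied):
        if len(path) == n:
            yield list(path)
            return
        x, y = path[-1]
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if nxt not in occupied:
                occupied.add(nxt)
                path.append(nxt)
                yield from extend(path, occupied)
                path.pop()
                occupied.remove(nxt)
    yield from extend([(0, 0)], {(0, 0)})

def energy(seq, path):
    """-1 per H-H contact between residues not adjacent in the chain."""
    pos = {p: i for i, p in enumerate(path)}
    e = 0
    for i, p in enumerate(path):
        if seq[i] != 'H':
            continue
        for dx, dy in MOVES:
            j = pos.get((p[0] + dx, p[1] + dy))
            if j is not None and j > i + 1 and seq[j] == 'H':
                e -= 1
    return e

def ground_state(seq):
    return min(conformations(len(seq)), key=lambda p: energy(seq, p))

seq = "HPHPPHHPH"          # toy sequence; search blows up beyond ~15 residues
best = ground_state(seq)
print(energy(seq, best), best)
```

The search space grows exponentially with chain length (finding the HP ground state is NP-hard in general), which is exactly why any practical system must approximate, and that approximation step is where the “blur” discussed next enters.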
So: every computable function must impose a final blur over that discrete lattice: a smoothing, regularization, or bounded-precision step that ensures computability. Without this blur, the system would risk confronting an unsolvable folding instance that would take forever. Excluding such cases implicitly constitutes a decision about an undecidable set. (Interestingly, the ‘blur’ itself also has a ‘blur’; call it sigmoid, division, etc.)
We can never determine exactly what potential structure is lost in this process, i.e. which regions of the formal folding landscape are suppressed or merged by the regularization that keeps the model finite and tractable, unless we train a better, sharper model; but then the problem repeats.
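A toy illustration of the point (the fold names and energies are made up): at high precision the model distinguishes two near-degenerate folds, but a bounded-precision “blur” merges them, and nothing inside the coarse model records what was merged:

```python
# A minimal illustration of the 'blur' (bounded precision / smoothing),
# using made-up energies: two conformations that a sharper model
# distinguishes become indistinguishable after rounding, so the coarse
# model silently merges regions of the landscape.

energies = {"fold_A": -4.000003, "fold_B": -4.000001, "fold_C": -1.5}

def ground_states(e, precision):
    """Return every fold whose rounded energy ties for the minimum."""
    rounded = {k: round(v, precision) for k, v in e.items()}
    best = min(rounded.values())
    return [k for k, v in rounded.items() if v == best]

print(ground_states(energies, 7))   # ['fold_A']            sharp model
print(ground_states(energies, 3))   # ['fold_A', 'fold_B']  blurred model
```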
And yes, that means: we don’t know if ML will produce diamond bacteria. Maybe, maybe not.
On Bayesianism:
Since it is popular here, the failure could be explained as: No Bayesian system can coherently answer the meta-question:
- “What is the probability that I can correctly assign probabilities?”
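A cartoon of the regress, with made-up numbers (an illustration, not a theorem): each meta-level discounts the level below, and every new assignment immediately raises the same question one level up, so the tower never closes:

```python
# A toy illustration (my numbers, not a theorem-level proof): stacking
# meta-levels of "probability that my probabilities are right" never
# closes. Unless every meta-confidence is exactly 1, the chained
# confidence decays, and assigning the next level re-raises the question.

def chained_confidence(base, meta, levels):
    """Confidence in the base claim after discounting by each meta-level."""
    c = base
    for _ in range(levels):
        c *= meta   # each level asks: P(previous assignment was correct)
        # ...and this new assignment itself needs a level above it.
    return c

for n in (1, 5, 20):
    print(n, chained_confidence(0.9, 0.95, n))
```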
“I want to see the post without AI-assisted editing”

I don’t care. I would expect an argument that goes “this post was made by AI; we don’t know exactly which points are AI-assisted and which aren’t; therefore we can ignore the content and demand the post unadulterated” from another community, but not this one. Here we believe that argument screens off authority, so it shouldn’t matter whether an AI was involved in writing this piece or not; it shouldn’t even matter whether the AI is making the arguments or the original author is (although, unless you think OP is lying, we know the arguments come from OP).
It would be nice for LessWrong to be a welcoming place for non-native English speakers to voice their opinions, and this kind of hazing runs directly counter to that. If you don’t think the argument was written well, fine. Say that. Explain why. Help OP improve. Don’t demand that they write without an editor.