I think this essay has a reasonable point, but also goes a little too far. When you say things like
However, as it pertains to the writing process, the same advice to “develop your taste and judgment” demands the opposite outlook: do not use AI. When it feels hard to edit your writing, that’s what it feels like to exercise your mind and to excavate good ideas. While this belief is common, it’s rare for people to believe it as absolutely as I do: I allow zero AI prose or even prose suggestions AT ALL. [...]
Eventually, that intuition doesn’t just help you choose between options as you would with AI outputs, but you become drawn towards the beauty of the next words which haven’t even materialized yet; you become possessed by the prophetic spirit of prosody, your feel for how language flows when it sounds right.
But that never happens when you write with AI from the outset.
Sometimes I will write something, notice that what I’ve written sounds clumsy, but be unable to figure out how to make it better, and then ask an LLM to suggest alternative phrasings. By your arguments, this would prevent me from getting any better at writing.
But that would imply that you also couldn’t learn from having an editor or English teacher go through what you’ve written and suggest edits. It’s not true that the most effective form of learning is to try thinking everything through yourself without resorting to any external feedback or assistance. Sometimes you simply get stuck or are able to recognize that something you’ve written is bad without being able to make it better, but are able to recognize an improvement when it is suggested to you.
In the past, for many of those sentences, I would just have concluded that I’m unhappy with this sentence but I’m not going to figure out anything better, so I’ll just leave it in and move on. Maybe it would have been better for my learning if I’d just spent several days thinking about that very specific sentence, but… I’m not going to do that. People do learn from having a good example modeled for them as well. So once an AI has suggested an improvement, it’s likely that in the future my brain will be able to spontaneously generate a similar solution by itself, without AI help.
You do of course have a point in that you need to put genuine effort into the writing in order to improve. If you just ask an AI to write everything for you, then yeah, you’re not going to get better. But if you’ve spent a bit of time trying to make something work and then ask for suggestions for improvement, you can actually learn from that.
Sometimes I will write something, notice that what I’ve written sounds clumsy, but be unable to figure out how to make it better, and then ask an LLM to suggest alternative phrasings. By your arguments, this would prevent me from getting any better at writing.
Yes, I think it would prevent you from improving, insofar as you had room to improve by tackling that yourself.
But that would imply that you also couldn’t learn from having an editor or English teacher go through what you’ve written and suggest edits.
[...]
People do learn from having a good example modeled for them as well. So once an AI has suggested an improvement, it’s likely that in the future my brain will be able to spontaneously generate a similar solution by itself, without AI help.
I don’t think that recognizing an improvement will make you much better. I think that an editor or English teacher can (1) make your piece better by overwriting parts of it (which doesn’t improve your own skills) and (2) make you better by providing explanations/feedback of what’s wrong (but not by overwriting your work).
Trying to learn through indirect lessons via their rewrites is probably similar to how much you’ll improve at tennis by watching Roger Federer play. Which is, a little, and more than if you hadn’t, but still far less than playing on the court yourself in that same time. The reason is that when you watch Federer play, it’s easy to identify the output of everything he does right, but you get no visibility into the subjective experience—what it’s like inside the mind and body—of playing in such a way that results in good output.
To further illustrate when it comes to writing, consider my mom’s and my ESL experiences.[1]
I would argue there is still “generative effort” within editing (an AI’s) work, because you still need to come up with reasons for what’s wrong, and alternatives that make it better. It’s orders of magnitude more “mental exercise” to articulate problems than to merely feel something is wrong. However, in the opposite direction, I think it’s orders of magnitude less generative effort to articulate problems than to write for yourself and “create something from nothing”. In my view, flexing the ‘generative muscle’ is what leads to improvement.
---
In the past, for many of those sentences, I would just have concluded that I’m unhappy with this sentence but I’m not going to figure out anything better, so I’ll just leave it in and move on.
This is a related reflex I’ve been actively trying to fight in my own writing process. Often I would come to a sentence, be unable to oneshot it, then [leave a rough outline in brackets to return to later]. These days, I’ve been trying to push through, and just write something passable. Partly because leaving work to return to duplicates my workload,[2] and partly because I believe that discipline trains my ‘generative muscle’.
First, me: I understand Russian but can’t speak it; classic immigrant child experience.[3] My understanding does not convert into generative capability. It’s a different subjective experience to grope and flail at trying to pluck words from an inaccessible aether, vs. skipping that and reviewing words in front of you. Like how a multiple choice test is easier to bullshit than free text questions.
Second, my mom: She reads plenty of English literature, and has a great sense for when it’s written well vs. poorly. But she is aware that her writing is, say, typical for an immigrant, and also that AI is ‘better’ but ‘soulless’. If all it took to improve was to notice deficiencies (or to know well-written examples), then couldn’t she just edit her emails until they’re good? But it’s not that easy for her.
In his blog post Lessons from Peter Thiel, Joe Lonsdale writes:
6. Don’t waste time talking about what you plan to think about; instead, work through it immediately.
Intellectual laziness can easily sneak up on you. If you are sitting there talking about problems you plan to solve later, there’s a good chance you are being inefficient. Similarly, in GTD, you don’t put off tasks that only take a couple minutes. In many cases, you can outline and solve or at least clarify any decision or problem you’re confronted with in just a few minutes.
In his blog post Proto-World and the Language Instinct, conlanger Mark Rosenfelder writes:
It’s commonly thought that children absorb languages effortlessly; but language acquisition studies make it clear that this is not so (cf. François Grosjean’s Life with Two Languages, 1984). Language learning takes effort even for children, and children seek to minimize that effort by learning only those languages they absolutely must, in order to communicate. For instance, once children of immigrants discover that their parents speak the national language, they may speak it with them, and retain only passive knowledge of the parents’ native language. Full language ability is likely to continue only when there are people in the child’s life that cannot understand the national language.
I don’t think that recognizing an improvement will make you much better. I think that an editor or English teacher can (1) make your piece better by overwriting parts of it (which doesn’t improve your own skills) and (2) make you better by providing explanations/feedback of what’s wrong (but not by overwriting your work).
I distinctly remember an English class in high school where we had to summarize an article, and our summary was to have a target length of 200 words or something like that. My first draft for the summary had something like 250 words, and I complained to our teacher that I just couldn’t shorten it without losing essential content. Everything I had was necessary!
She looked at what I had and then struck out redundant words within a couple of my sentences, reducing their length by something like half.
I looked at what she’d done and went “oh”. She didn’t need to explain to me why she’d struck out those particular words: when I looked at the resulting sentences, it was obvious that they worked just as well without the removed bits. Any explanation would have been as redundant as the words themselves.
Having received that demonstration, I started looking for more redundant words in my summary and managed to bring it down to something like 190 words. That one example had been enough that I could now do something that I’d literally considered impossible only two minutes ago.
I have many memories of learning working like this across a variety of domains, from language to math to interacting with children. Of course, some things need to be explained, but often just providing enough demonstrations of the correct pattern is more effective than trying to explain it explicitly. Often these things have lots of implicit rules that even the person applying them wouldn’t be able to articulate consciously, so if you tried to follow the explicit rationale, you’d get more confused than if you just trusted your brain to learn the pattern from demonstrations.
But of course sometimes it can be beneficial to get an explanation too. And… I find that at least Claude does usually also give me an explanation? Like, here’s an excerpt when I asked it to suggest improvements:
Me:
“If you are working on a task, you will want to know whether you are getting any closer to your goal. If you find that you are, then you are doing something right and can feel satisfied. If you are not, then you are probably doing something wrong and should change tactics.
In objective terms, waiting for a bus is not something you can do better or well. Time will pass at the same rate regardless of what you do.
But in subjective terms, you can make a difference. If you manage to think about something else and forget that you are waiting, it can make it feel like time is passing faster.
So the brain can apply some of the same machinery it uses to generate feedback signals for any other task. If I look at the time twice in rapid succession and see that it’s 11 minutes to the bus on both occasions, the brain can interpret that as “the thing you did failed to bring you any closer to the goal” and feel disappointment.
If I manage to think about something else and hold on for longer, such that it’s 8 minutes the next time I look at the clock, that can register as a great success! The thing that I did led to three whole minutes passing, a substantial improvement over zero!”
---
now “So the brain can apply some of the same machinery it uses to generate feedback signals for any other task” doesn’t feel clear enough, or feels like it’d need a bit more explanation. suggestions for that part?
Claude:
You’re right—that sentence is doing a lot of conceptual heavy lifting without enough setup. Here are a few ways to make it clearer:
Option 1 - More explicit about the mechanism: “So the brain can treat this like any other goal-directed activity, generating the same kind of feedback signals it would for a work project or exercise routine.”
Option 2 - Break it into two sentences: “So the brain treats waiting like it’s a task with measurable progress. It generates the same kind of feedback signals—satisfaction when you’re ‘succeeding,’ disappointment when you’re ‘failing’ - that it would for any goal you’re working toward.”
Option 3 - Lead with a concrete analogy: “It’s like how your brain gives you a little hit of satisfaction when you check off items on a to-do list, or feels frustrated when you’re stuck on a problem. The brain can apply this same reward-and-disappointment machinery to the ‘task’ of waiting.”
I lean toward option 2 because it maintains your conversational tone while being more specific about what you mean by “feedback signals” and “machinery.”
That’s both offering suggestions for improvement and providing a rationale for them. (Though in this particular case, I ultimately chose not to use any of the suggested versions and just removed the sentence entirely.)
Yes, your comment is shifting my opinion: I’ll concede that examples are useful for teaching ‘binary classification tasks’. “Here’s an example of how to identify redundant words/sentences which don’t contribute to your point,” or “here’s an example of how you can write with more vivid language.”
I.e., if you’re new to the gym, you can follow your friend, copy their workout, and learn how to do those exercises. However, just copying them gives you very little insight into why: what made them choose this exercise, which muscle it’s supposed to hit, how to ask a stranger if you can work in a set next time when you come alone and the machine is taken, and a million other open-ended “project management” tasks. I see those as requiring executive function, and knowing them well is a more reliable indicator of whether you’ll go to the gym consistently later and improve in the long run than knowledge of one exercise.
I think my unstated contention is that in writing, examples can show you how to put something more briefly, etc., but that doesn’t impart any learning to your subjective executive function and critical thinking. If a good writer did a total rewrite of your paragraph, they may have changed it on so many layers of abstraction (word, sentence, paragraph, etc.) at once that it’s impossible to reverse-engineer “Which mental state would produce these changes in the first place, and how do I get that?”
For what it’s worth, when I see your example:
“So the brain can apply some of the same machinery it uses to generate feedback signals for any other task” doesn’t feel clear enough, or feels like it’d need a bit more explanation.
The way my mind parses it is:
The idea’s expression is not clear because its content is not clear, not the other way around:
“some of the same machinery” is a vague proxy for a specific function you actually mean
Convert to something like “reward circuitry”
“[...] any other task” is also an abstract / parent-category proxy for a specific classification
Convert to something like “non-instrumental tasks”[1]
So in the end you’d be left with something like:
Pre: “So the brain can apply some of the same machinery it uses to generate feedback signals for any other task”
Post: “So the brain can aim its reward circuitry even at non-instrumental tasks.”
I think that’s clearer, obviously pending alignment with (1) what you actually meant, and (2) your target audience. But that only happens after taking effort to clarify what you’re even trying to say, which is the ‘platonic’ writing process, and has lots of friction. If I just dropped the end result without my reasoning, it’s not clear what one could learn from the example to repeat next time, besides “just make it better”. But you do learn that by writing without assistance, clarifying your thoughts (or even asking AI to help you clarify what you mean).
And FWIW, I think this version is a lot better than what Claude came up with; I only reviewed Claude’s suggestions afterwards. Which I think substantiates that if you did resort to Claude without making the effort yourself, you would inherit AI barnacles that you don’t actually mean to collect. The Claudisms can be too abstract to even articulate—though, did you catch how each example said “[it’s] like [x]”? I wouldn’t have, unless I’d looked closer. And that’s the risk someone runs resorting to AI. “[it’s] like [x]” is a construction that evidently leads to wordier explanations. But an editor telling you to avoid that construction will not inoculate you against the next Claudism sneaking in (and won’t tell you in which cases that construction is appropriate).
Lastly, Claude is unlikely to ever suggest that you just remove the passage, which is what you ultimately did. That too is an example of what becomes clear when you think about what you’re even trying to say, rather than asking AI to “convert” your idea into fully phrased thoughts.
By the way, I have no issue with using AI like a tool, e.g., asking it “term for a task you can’t make progress in, as in, waiting for a bus couldn’t be done any better or worse than the one way”. But this only works well once you know what you really need, as opposed to asking for AI’s composition from the outset.
I.e., if you’re new to the gym, you can follow your friend, copy their workout, and learn how to do those exercises. However, just copying them gives you very little insight into why: what made them choose this exercise, which muscle it’s supposed to hit, how to ask a stranger if you can work in a set next time when you come alone and the machine is taken, and a million other open-ended “project management” tasks. I see those as requiring executive function, and knowing them well is a more reliable indicator of whether you’ll go to the gym consistently later and improve in the long run than knowledge of one exercise.
I agree that just copying your friend’s workout does not teach you how to do all of those other skills.
At the same time, especially if you are a complete beginner, it can still be better to start out from just copying it and not thinking about all that other stuff. Doing that workout sufficiently many times will give you a feel for what it’s like to do that workout and how it usually ends up leaving you feeling, and the natural variation in the ways you end up doing it probably gives you some sense of what works better and what works worse. Getting that initial experiential understanding will then make it easier for you to apply any information related to how to do many of the “project management” tasks.
I think my unstated contention is that in writing, examples can show you how to put something more briefly, etc., but that doesn’t impart any learning to your subjective executive function and critical thinking. If a good writer did a total rewrite of your paragraph, they may have changed it on so many layers of abstraction (word, sentence, paragraph, etc.) at once that it’s impossible to reverse-engineer “Which mental state would produce these changes in the first place, and how do I get that?”
I read this as saying “if you have an editor whose skill level is far above yours and they completely rewrite a paragraph that you’ve written, you may recognize that it is better, but simply seeing that rewritten paragraph isn’t sufficient to teach you how to write paragraphs that are equally good”.
I agree with that. But I’d note that:
“You won’t be able to reach their level” is a different claim from “you won’t be able to improve at all”. Yes, you might not learn how to produce a paragraph that has all of the improved elements, but you may be able to identify some specific aspects of it that are better and that you can incorporate into your writing.
If the skill gap is sufficiently big, you wouldn’t necessarily be able to reach their level even if they did explain their reasoning and mental state. It might allow you to improve, but still not reach their level, because you are missing prerequisites that can only be acquired by extended practice.
I read your original post as saying “using prose from AI will prevent you from improving at all”. If your position is just “using prose from AI won’t allow you to jump to the AI’s skill level right away”, then I agree, but that seems like a much less interesting claim.
The idea’s expression is not clear because its content is not clear, not the other way around. [...] But that only happens after taking effort to clarify what you’re even trying to say, which is the ‘platonic’ writing process, and has lots of friction.
That seems right to me. And I think the most useful part of Claude’s feedback was its comment of “that sentence is doing a lot of conceptual heavy lifting without enough setup”, and the fact that all of its suggestions were, as you say, pretty bad—those two together pointed me to the same thing that you’re saying, which is that my content wasn’t clear.
Sometimes when I give Claude a somewhat muddled idea that’s pointing to something useful, it’s able to find the idea I’m trying to express and crystallize it better than I managed. And sometimes I get results more like this, which indicate that it’s not clear what the idea even is that I am struggling to express. It doing so badly prompted me to think about “okay, I need to elaborate more on what exactly I mean… hmm, going into a longer explanation about how exactly reward mechanisms in the brain work would kind of distract from the overall narrative here… actually, do I need that sentence at all, what happens if I just cut it? ah, that works better.”
In one sense, this might not have been the best example to use, since ultimately I didn’t end up using any of Claude’s suggestions. (I mostly picked it because I couldn’t immediately remember any other specific example where I did end up using Claude’s non-fiction prose directly, and claude.ai’s functionality for searching old chats is bad, so I couldn’t quickly find a more relevant example.) But on the other hand, I do feel like it supports my point of “making use of AI prose doesn’t automatically make your writing worse”?
In that, when you say
Claude is unlikely to ever suggest that you just remove the passage, which is what you ultimately did. That too is an example of what becomes clear when you think about what you’re even trying to say, rather than asking AI to “convert” your idea into fully phrased thoughts.
then it’s true that I don’t remember Claude ever suggesting this. But nonetheless, asking Claude for prose suggestions did lead to me removing the passage. Asking AI for prose suggestions is what caused me to think through what I was even trying to say. Even though it could also have offered me a suggestion that I would have thought through and then accepted.
So I think that also contradicts what I understood to be your post’s original message of “never ask AI for prose suggestions or you will learn nothing”. Yes, if you ask the AI for prose suggestions and just automatically put them in rather than doing any real thought, probably your learning will suffer. But if you ask the AI for prose suggestions and then think through whether they’re actually any better and what its feedback suggests about what you’ve already written so far, sometimes realizing that actually all of its suggestions are so bad that you’re better off trying an entirely different approach or deleting that bit… then I’d think that that facilitates learning.
At the same time, especially if you are a complete beginner, it can still be better to start out from just copying it and not thinking about all that other stuff. [...] Getting that initial experiential understanding will then make it easier for you to apply any information related to how to do many of the “project management” tasks.
Yeah, you’re right—I think the absolutism can pull the ladder up from beginners.
I’ll say, I see the absolutism that I advocate for as “aspirational absolutism”, like how the commandment “You shall not murder” should be understood as practically absolute (i.e., people do not have license to set personal thresholds for which murders are okay, because such license would not technically forbid a threshold of “this person inconveniences me”). But everyone also recognizes that there are edge cases where it’s okay—e.g., war, self-defense. So, my essay is phrased strongly because I am trying to protect against the first kind of permissiveness, but I do recognize there’s another class of AI use that is fine. The concern is people thinking they’re doing the second, as cover for doing the first.
I read your original post as saying “using prose from AI will prevent you from improving at all”. If your position is just “using prose from AI won’t allow you to jump to the AI’s skill level right away”, then I agree, but that seems like a much less interesting claim.
I do think for people who already write on the internet, as opposed to complete beginners, AI does prevent 90-95%+ of potential improvement, so pragmatically speaking my point is true. Partly because “jump to the AI’s skill level right away” doesn’t apply: I think AI’s ability to “express ideas clearly” is often bad. Claude’s suggestions for you in this case are my prototypical example, because:
It provides a set of options, which “leads” a non-cautious author towards choosing the best one as a base & editing it. But it’s not apparent that they might all share the same fatal flaw (e.g., “[it’s] like [x]”, when that’s not necessary), and you won’t notice because it’s all that’s on offer.
I think this applies even to the Claude outputs you do end up accepting.
It provides options at all, but sometimes, the advice should be to subtract.
I also think its advice was technically wrong. When it says “that sentence is doing a lot of conceptual heavy lifting without enough setup”, that has two problems:
It’s not actually true; my rendition of the point is briefer and needs less setup.
Identifying a problem with that passage was not Claude’s contribution; you had already flagged it yourself in your prompt.
Therefore, someone listening to Claude will be led astray. When you say “Asking AI for prose suggestions is what caused me to think through what I was even trying to say” (i.e., even though you didn’t take its advice), it means you have to be actively skeptical of its outputs, which most people don’t have the endurance to always do—they only reach for AI when they’re already “exhausted,” so to speak. In practice, people will adopt poor artifacts of writing which aren’t borne of clear thinking. So I still think you would learn more by sitting and thinking.