I.e., if you’re new to the gym, you can follow your friend, copy their workout, and learn how to do those exercises. However, just copying them gives you very little insight into the why: what made them choose this exercise, which muscle it’s supposed to hit, how to ask a stranger if you can work in a set next time you come alone and the machine is taken, and a million other open-ended “project management” tasks. I see those as requiring executive function, and knowing them well is a more reliable indicator of whether you’ll go to the gym consistently later and improve in the long run than knowledge of one exercise.
I agree that just copying your friend’s workout does not teach you all of those other skills.
At the same time, especially if you are a complete beginner, it can still be better to start out by just copying it and not thinking about all that other stuff. Doing that workout sufficiently many times will give you a feel for what it’s like to do it and how it usually ends up leaving you feeling, and the natural variation in the ways you end up doing it will probably give you some sense of what works better and what works worse. Getting that initial experiential understanding will then make it easier for you to apply any information related to how to do many of the “project management” tasks.
I think my unstated contention is that in writing, examples can show you how to put something more briefly, etc., but that doesn’t impart any learning to your subjective executive function and critical thinking. If a good writer did a total rewrite of your paragraph, they may have changed it on so many layers of abstraction (word, sentence, paragraph, etc.) at once that it’s impossible to reverse-engineer “Which mental state would produce these changes in the first place, and how do I get that?”
I read this as saying “if you have an editor whose skill level is far above yours and they completely rewrite a paragraph that you’ve written, you may recognize that it is better, but simply seeing that rewritten paragraph isn’t sufficient to teach you how to write paragraphs that are equally good”.
I agree with that. But I’d note that:
“You won’t be able to reach their level” is a different claim from “you won’t be able to improve at all”. Yes, you might not learn how to produce a paragraph that has all of the improved elements, but you may be able to identify some specific aspects of it that are better and that you can incorporate into your writing.
If the skill gap is sufficiently big, you wouldn’t necessarily be able to reach their level even if they did explain their reasoning and mental state. The explanation might allow you to improve, but still not to reach their level, because you are missing prerequisites that can only be acquired through extended practice.
I read your original post as saying “using prose from AI will prevent you from improving at all”. If your position is just “using prose from AI won’t allow you to jump to the AI’s skill level right away”, then I agree, but that seems like a much less interesting claim.
The idea’s expression is not clear because its content is not clear, not the other way around. [...] But that only happens after taking effort to clarify what you’re even trying to say, which is the ‘platonic’ writing process, and has lots of friction.
That seems right to me. And I think the most useful parts of Claude’s feedback were its comment that “that sentence is doing a lot of conceptual heavy lifting without enough setup” and the fact that all of its suggestions were, as you say, pretty bad. Those two together pointed me to the same thing you’re saying: that my content wasn’t clear.
Sometimes when I give Claude a somewhat muddled idea that’s pointing to something useful, it’s able to find the idea I’m trying to express and crystallize it better than I managed. And sometimes I get results more like this, which indicate that it’s not even clear what the idea is that I’m struggling to express. Its doing so badly prompted me to think: “okay, I need to elaborate more on what exactly I mean… hmm, going into a longer explanation about how exactly reward mechanisms in the brain work would kind of distract from the overall narrative here… actually, do I need that sentence at all, what happens if I just cut it? ah, that works better.”
In one sense, this might not have been the best example to use, since ultimately I didn’t end up using any of Claude’s suggestions. (I mostly picked it because I couldn’t immediately remember any other specific example where I did end up using Claude’s non-fiction prose directly, and claude.ai’s functionality for searching old chats is bad, so I couldn’t quickly find a more relevant example.) But on the other hand, I do feel like it supports my point of “making use of AI prose doesn’t automatically make your writing worse”?
In that, when you say
Claude is unlikely to ever suggest that you just remove the passage, which is what you ultimately did. That too is an example of what becomes clear when you think about what you’re even trying to say, rather than asking AI to “convert” your idea into fully phrased thoughts.
then it’s true that I don’t remember Claude ever suggesting this. But nonetheless, asking Claude for prose suggestions did lead to me removing the passage. Asking AI for prose suggestions is what caused me to think through what I was even trying to say, even though it could also have offered me a suggestion that I would have thought through and then accepted.
So I think that, too, contradicts what I understood to be your post’s original message of “never ask AI for prose suggestions or you will learn nothing”. Yes, if you ask the AI for prose suggestions and just drop them in without doing any real thought, your learning will probably suffer. But if you ask the AI for prose suggestions and then think through whether they’re actually any better and what its feedback suggests about what you’ve already written, sometimes realizing that all of its suggestions are so bad that you’re better off trying an entirely different approach or deleting that bit… then I’d think that facilitates learning.
At the same time, especially if you are a complete beginner, it can still be better to start out by just copying it and not thinking about all that other stuff. [...] Getting that initial experiential understanding will then make it easier for you to apply any information related to how to do many of the “project management” tasks.
Yeah, you’re right—I think the absolutism can pull the ladder up from beginners.
I’ll say, I see the absolutism that I advocate for as “aspirational absolutism”, like how the commandment “You shall not murder” should be understood as practically absolute (i.e., people do not have license to set personal thresholds for which murders are okay, because otherwise nothing technically forbids setting a threshold of “this person inconveniences me”). But everyone also recognizes that there are edge cases when it’s okay to kill, e.g., war or self-defense. So, my essay is phrased strongly because I am trying to protect against the first kind of permissiveness, but I do recognize that there’s another class of cases where AI use is fine. The concern is people thinking they’re doing the second as a cover for doing the first.
I read your original post as saying “using prose from AI will prevent you from improving at all”. If your position is just “using prose from AI won’t allow you to jump to the AI’s skill level right away”, then I agree, but that seems like a much less interesting claim.
I do think that for people who already write on the internet, as opposed to complete beginners, AI does prevent 90-95%+ of potential improvement, so pragmatically speaking my point is true. Partly because “jump to the AI’s skill level right away” doesn’t apply: I think AI’s ability to “express ideas clearly” is often bad. Claude’s suggestions for you in this case are my prototypical example, because:
It provides a set of options, which “leads” a non-cautious author towards choosing the best one as a base & editing it. But it’s not apparent that they might all share the same fatal flaw (e.g., “[it’s] like [x]”, when that’s not necessary), and you won’t notice because it’s all that’s on offer.
I think this applies even to the Claude outputs you do end up accepting.
It provides options at all, but sometimes the advice should be to subtract.
I also think its advice was technically wrong. When it says “that sentence is doing a lot of conceptual heavy lifting without enough setup”, that has two problems:
It’s not actually true; my rendition of the point is briefer, needing less setup.
Identifying a problem with that passage was not Claude’s contribution; it was your suggestion.
Therefore, someone listening to Claude will be led astray. When you say “Asking AI for prose suggestions is what caused me to think through what I was even trying to say” (i.e., even though you didn’t take its advice), it means you have to be actively skeptical of its outputs, which most people don’t have the endurance to always do; they only reach for AI when they’re already “exhausted,” so to speak. In practice, people will adopt poor artifacts of the writing which aren’t born of clear thinking. So I still think you would learn more by sitting and thinking.