Yes, your comment is shifting my opinion toward conceding that examples are useful for teaching ‘binary classification tasks’: “Here’s an example of how to identify redundant words/sentences which don’t contribute to your point,” or “here’s an example of how you can write with more vivid language.”
I.e., if you’re new to the gym, you can follow your friend, copy their workout, and learn how to do those exercises. However, just copying them gives you very little insight into the why: what made them choose this exercise, which muscle it’s supposed to hit, how to ask a stranger if you can work in a set next time you come alone and the machine is taken, and a million other open-ended “project management” tasks. I see those as requiring executive function, and knowing them well is a more reliable indicator of whether you’ll go to the gym consistently and improve in the long run than knowledge of one exercise.
I think my unstated contention is that in writing, examples can show you how to put something more briefly, etc., but that doesn’t train your own executive function and critical thinking. If a good writer did a total rewrite of your paragraph, they may have changed it on so many layers of abstraction (word, sentence, paragraph, etc.) at once that it’s impossible to reverse-engineer “Which mental state would produce these changes in the first place, and how do I get that?”
For what it’s worth, when I see your example:
“So the brain can apply some of the same machinery it uses to generate feedback signals for any other task” doesn’t feel clear enough, or feels like it’d need a bit more explanation.
The way my mind parses it is:
The idea’s expression is not clear because its content is not clear, not the other way around:
“some of the same machinery” is a vague proxy for a specific function you actually mean
Convert to something like “reward circuitry”
“[...] any other task” is also an abstract / parent-category proxy for a specific classification
Convert to something like “non-instrumental tasks”[1]
So in the end you’d be left with something like:
Pre: “So the brain can apply some of the same machinery it uses to generate feedback signals for any other task”
Post: “So the brain can aim its reward circuitry even at non-instrumental tasks.”
I think that’s clearer, obviously pending alignment with (1) what you actually meant and (2) your target audience. But that only happens after taking the effort to clarify what you’re even trying to say, which is the ‘platonic’ writing process, and it has lots of friction. If I just dropped the end result without my reasoning, it’s not clear what one could learn from the example to repeat next time, besides “just make it better”. But you do learn that by writing without assistance and clarifying your thoughts (or even by asking AI to help you clarify what you mean).
And FWIW, I think this version is a lot better than what Claude came up with; I only reviewed those afterward. Which I think substantiates that if you did resort to Claude without taking the effort yourself, you would inherit AI barnacles that you don’t actually mean to collect. The Claudisms can be too abstract to even articulate. Though, did you catch how each example said “[it’s] like [x]”? I wouldn’t have, unless I looked closer. And that’s the risk someone runs in resorting to AI. “[it’s] like [x]” is a construction that evidently leads to wordier explanations. But an editor telling you to avoid that construction will not inoculate you against the next Claudism sneaking in (and won’t tell you in which cases that construction is appropriate).
Lastly, Claude is unlikely to ever suggest that you just remove the passage, which is what you ultimately did. That too is an example of what becomes clear when you think about what you’re even trying to say, rather than asking AI to “convert” your idea into fully phrased thoughts.
[1] By the way, I have no issue with using AI like a tool, e.g., asking it “term for a task you can’t make progress in, as in, waiting for a bus couldn’t be done any better or worse than the one way”. But this only works well once you know what you really need, as opposed to asking for the AI’s composition from the outset.
Yeah, you’re right—I think the absolutism can pull the ladder up from beginners.
I’ll say, I see the absolutism that I advocate for as “aspirational absolutism”, like how the commandment “You shall not murder” should be understood as practically absolute (i.e., people do not have license to set personal thresholds for which murders are okay, because nothing then technically forbids a threshold of “this person inconveniences me”). But everyone also recognizes that there are edge cases where it’s okay to kill: war, self-defense. So my essay is phrased strongly because I am trying to protect against the first kind of permissiveness, but I do recognize there’s another class of AI use that is fine. The concern is people thinking they’re doing the second as cover for doing the first.
I do think that for people who already write on the internet, as opposed to complete beginners, AI does prevent 90–95%+ of potential improvement, so pragmatically speaking my point holds. Partly because “jump to the AI’s skill level right away” doesn’t apply: I think AI’s ability to “express ideas clearly” is often bad. Claude’s suggestions for you in this case are my prototypical example, because:
It provides a set of options, which “leads” a non-cautious author towards choosing the best one as a base & editing it. But it’s not apparent that they might all share the same fatal flaw (e.g., “[it’s] like [x]”, when that’s not necessary), and you won’t notice because it’s all that’s on offer.
I think this applies even to the Claude outputs you do end up accepting.
It provides options at all, when sometimes the right advice is to subtract.
I also think its advice was technically wrong. When it says “that sentence is doing a lot of conceptual heavy lifting without enough setup”, that has two problems:
It’s not actually true; my rendition of the point is briefer and needs less setup.
Identifying a problem with that passage was not Claude’s contribution; it was your suggestion.
Therefore, someone listening to Claude will be led astray. When you say “Asking AI for prose suggestions is what caused me to think through what I was even trying to say” (i.e., even though you didn’t take its advice), it means you have to be actively skeptical of its outputs, which most people don’t have the endurance to always do; they only reach for AI when they’re already “exhausted,” so to speak. In practice, people will adopt poor artifacts of writing that aren’t born of clear thinking. So I still think you would learn more by sitting and thinking.