Using AI can help you get things done faster, even if it’s worse than you at coding
Isn’t getting working/production-ready code done faster the definition of being better than you at coding? It’s possible the creator of Claude Code is wrong about this: maybe he would be more productive long-term writing the code himself, or the code is actually unacceptable in ways he hasn’t noticed yet. But if he’s correct that it’s more productive to have Claude write it, then Claude is better at coding than he is.
If I use my IDE’s LSP functions to do a large automated refactor, is the IDE better than me at coding?
There is more to “coding” than writing code, notably software design. As a software engineer I use Claude Code daily (I write maybe 1% of my total LOC by hand these days), but I still have to steer it: tell it which architecture to use, which abstractions to apply, and correct it when it tries to take a shortcut instead of solving a problem at the root.
When it can produce PRs which would pass code review on a competent software engineering team without that steering, we will have a superhuman coder.
I was thinking of “coder” as specifically the job of writing code, which I assume is what the Claude Code guy meant too. AI is clearly not reliable at system design yet.
System design is one part of designing software, but isn’t so much what I’m trying to point at here.
Claude Opus 4.5 still can’t produce or follow a simple plan to implement a feature on a mid-sized codebase independently.
As an example: earlier today I was implementing session resumption, so that a client can reconnect to the server and continue its session after losing the connection. One small part of this is re-syncing the state once the current (server-side) task has finished.
Claude Code was not capable of designing a functioning solution to this problem in its planning mode (it kept trying to sync the state immediately upon connecting, leading to the client missing the result of the in-progress task).
The solution I chose for this particular instance of the problem was to have the server add a state-sync command to that session’s command queue when a client reconnects. Claude Code then correctly updated the plan to show the exact code changes required.
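To make the design concrete, here is a minimal sketch of the two approaches, assuming a per-session queue that processes commands one at a time. All of the names (Session, Command, enqueue, drain, and so on) are made up for illustration and are not from the actual codebase.

    // Minimal sketch only; every identifier here is hypothetical.
    type State = Record<string, unknown>;

    type Command =
      | { kind: "task"; run: (state: State) => Promise<void> }
      | { kind: "stateSync" };

    interface Client {
      send(msg: unknown): void;
    }

    class Session {
      private queue: Command[] = [];
      private running = false;
      private state: State = {};
      private client: Client | null = null;

      // What the planner kept proposing: sync immediately on reconnect.
      // If a task is still running, the snapshot sent here predates that
      // task's result, and the client never hears about it afterwards.
      //
      //   onReconnect(client: Client) {
      //     this.client = client;
      //     this.client.send({ type: "stateSync", state: this.state });
      //   }

      // Chosen design: enqueue the sync behind whatever is already queued,
      // so it only runs once the in-progress task has finished.
      onReconnect(client: Client): void {
        this.client = client;
        this.enqueue({ kind: "stateSync" });
      }

      enqueue(cmd: Command): void {
        this.queue.push(cmd);
        void this.drain();
      }

      // Process queued commands one at a time, in order.
      private async drain(): Promise<void> {
        if (this.running) return;
        this.running = true;
        while (this.queue.length > 0) {
          const cmd = this.queue.shift()!;
          if (cmd.kind === "task") {
            await cmd.run(this.state); // task updates the session state
          } else {
            this.client?.send({ type: "stateSync", state: this.state });
          }
        }
        this.running = false;
      }
    }

Because the sync is just another command in the queue, it cannot run until the in-progress task has completed and its result is reflected in the state the client receives.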
However, when implementing the plan, it forgot to make the one change that actually adds the command to the queue. End-to-end tests caught this, but rather than implementing what was written in the plan, Claude’s fix was to do a state sync automatically after every task. A nudge to re-read the plan was enough for it to see the mistake and correct it.
The difference from asking a human co-worker to make the same change is stark. We are still a way off from superhuman coders.