I read your title and thought “exactly!”. I then read your post, and it was pretty much exactly what I expected after reading the title. So, ironically, it seems like you perfectly compressed the state of your mind into a few words. :) But to be fair, that’s probably mostly because we’ve had very similar experiences, and it doesn’t translate to human<->LLM communication.
When vibe-coding, many things go really fast, but I often run into cases where the change I want is very nuanced, and I can see that just blurting it out would cause the LLM to do something different from what I have in mind. So I sometimes have to write like 5 paragraphs to describe one relatively small change. Then the LLM comes up with a plan, which I have to read, which again takes time, and sometimes there are 1-2 more details to clear up, so it’s a whole process. All of this would happen naturally, without me even noticing, if I were writing the code myself.
A year ago I wrote a post in a somewhat similar direction, but the recent months of vibe coding with Opus 4.5 really gave me a new appreciation for all the different bottlenecks that remain. Once “writing code” is automated (which is basically now), programmers aren’t instantly replaced (evidently); we just move on to the next bottleneck. So the average programmer will maybe be sped up by some percentage, with only extreme outliers getting a multiple-fold increase in output, and the rest of the work merely shifts to different tasks. It’s still kind of mindblowing to me that that’s how it is. Perhaps it gets “solved” once the entire stack, from CEO to PM to testers to programmers, is AIs; but then I guess they would also have to communicate via not-flawlessly-efficient means with each other (and sometimes with themselves, until continual learning is solved), and would still run into these coordination-overhead issues? Though I guess all that overhead matters less when the systems themselves run at 100x our speed and work 24h/day.