Some quick thoughts on vibe coding:
it shifts you from being a developer into more of a product manager role
but the developers you manage are a) occasionally stupid/unwise and b) extremely fast and never tired
this makes it relatively addictive: feedback cycles are much shorter than for a “real” product manager, who often has to wait weeks to see their wishes turn into software, and the rewards have a strong element of randomness, with things sometimes turning out surprisingly well one-shot and sometimes not at all
It can also lead to laziness, as it’s very tempting to get used to “just letting the AI do it”, even in projects that aren’t primarily vibe-coded, instead of investing one’s own brainpower
AI agents rarely, if ever, talk back or tell you that something is a bad idea or doesn’t fit the current architecture; they just do things as well as currently possible. This kind of local optimization quickly runs into walls if you don’t carefully mitigate it.
Part of the problem is that by default the AI has extremely little context and knows little about the purpose, scope and ambition of your project. So when you tell it “do X”, it typically can’t tell whether you mean “do X quick and dirty, I just want the results ASAP” or “lay out a 10-step plan to do X in the most sustainable way possible, one that allows us to eventually reach points Y and Z in the future”. If it gets things wrong in either direction, that tends to be frustrating, but it can’t read your mind (yet).
AI agents that are able to run unit tests and end-to-end tests and see compiler errors are so much more useful than their blind counterparts.
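One way to make an agent less blind is to give it a single check command it can run after every change. A minimal sketch, assuming a Python project with pytest and mypy installed (the file name and tool choice are purely illustrative):

```python
# check.py -- a single entry point an agent can run after every change
# (illustrative sketch; assumes pytest and mypy are installed in the project)
import subprocess
import sys


def run(cmd: list[str]) -> int:
    """Run a command, echoing it first, and return its exit code."""
    print("running:", " ".join(cmd))
    return subprocess.call(cmd)


if __name__ == "__main__":
    # Type errors surface much like compiler errors for the agent to read.
    status = run([sys.executable, "-m", "mypy", "."])
    if status == 0:
        # Only run the (slower) test suite once the code at least type-checks.
        status = run([sys.executable, "-m", "pytest", "-q"])
    sys.exit(status)
```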
If you need some particular piece of software but are unsure whether current AIs can deliver it, it might make sense to write a detailed, self-contained and as-complete-as-possible specification, and then throw it at an AI agent whenever a new model (or scaffolding) comes out. GitHub Copilot with GPT-5 was able to do many more things than I would have imagined, with non-trivial but still relatively limited oversight.
I haven’t yet tried whether just letting it do its thing, saying only “continue” after each iteration, might be sufficient. Maybe I put more time into guiding it than is actually necessary.
That being said: writing a self-contained specification that captures your entire idea, with all the details nailed down so that there is little room for misunderstanding, is surprisingly hard. There are probably cases where just writing the software yourself (if you can) takes less time than fully specifying it.
Then again, “writing down a specification” can also happen interview-style using an AI’s voice mode, so you can do it while doing chores.