The “attack dog” metaphor is definitely sticky, but I’ve found that you can build the fence directly INTO the task definition, so the dog runs free, just along a “directed” and “constrained” path.
When the context is ambiguous and the objective (the “why”) isn’t clear enough, it doesn’t matter how smart the model is: problems will arise and technical debt is certain. If you invest in complete, crystal-clear specs, the chances that the plans come out right increase significantly (the so-called Context Engineering).
I ran this as an experiment: I invested about 80% of my time in building specs, and then code generation became practically automatic. As a result, Claude produced 7 modules (46 endpoints) in 4.5 hours (including testing), and the code was virtually bug-free and production-ready. The spec was so complete, with all the hard decisions already made, that Claude didn’t have to guess; it spent its time doing the right things instead of trying to decide on a course (a task where it’s often wrong).
Of course you lose the vibe-coding exploration, but that experience moves upstream into the spec-building process. The hard thinking still happens, just earlier, and with a sizable prize: predictability and peace of mind from “day two” onward.
I’d be interested to see a write-up of your experience doing this. My own experience with spec-driven development hasn’t had so much success. I’ve found that the models tend to have trouble sticking to the spec.
With great pleasure! The experience was so revealing that it led me to codify the process into the Stream Coding Manifesto (available on GitHub here: Stream Coding), just launched last month, by the way.
I’ve also created the corresponding Claude Skill to make it immediately actionable, also downloadable via GitHub.