I was part of the study actually. For me, I think a lot of the productivity gains were lost from starting to look at some distraction while waiting for the LLM and then being “afk” for a lot longer than the prompt took to run. However! I just discovered that Cursor has exactly the feature I wanted them to have: a bell that rings when your prompt is done. Probably that alone is worth 30% of the gains.
Other than that, the study started in February (?). The models have gotten a lot better in just the past few months, so even if the study's finding held during the period it ran, I don’t expect it to hold now or in another three months (unless the devs are really bad at using AI or something).
Subjectively, I spend less time now trying to wrangle a solution out of the models, and a lot more often it just works pretty quickly.
Did you mean to reply to that parent?