This is not going to be a high quality answer, sorry in advance.
I noticed this with someone in my office who is learning robotic process automation: people are very bad at measuring their own productivity; they only really see certain kinds of gains and certain kinds of losses. I know someone who swears emphatically that he is many times as productive, but who has become almost totally unreliable. He's in denial over it, and a couple of people have now openly told me they try to remove him from workflows because of all the problems he causes.
I think the situation is like this:
If you finish a task very quickly using automated methods, that feels viscerally great and, importantly, is very visible. If your work then incurs time costs later, you might not be able to trace that extra cost back to the "automated" tasks you set up earlier, doubly so if those costs are absorbed by other people catching what you missed and correcting your mistakes, or quietly doing the things that used to get done when you handled the task manually.
I imagine it is hard to track down a bug and know, for certain, that you only had to waste that time because you used an LLM instead of just doing the work yourself. You don't know who else had to waste time fixing your problem because the LLM's code is spaghetti, or at least you don't feel it in your bones the way you feel increases in your own output. You also don't get to see the counterfactual project where things just went better in intangible ways because you didn't outsource your thinking to GPT. Few people notice, after the fact, how many problems they incurred because of a specific thing they did.
I think LLM usage is almost ubiquitous at this point; if it were conveying big benefits, that would show up more clearly. If everyone is saying they are 2x more productive (which is actually on the low end of some testimonies), then it is probably the case that they are simply oblivious to the problems they are causing for themselves, because those problems are far less visible than the gains.