I wonder whether, in the unlikely case that AI progress stopped and we were left with AIs exactly as smart as they are now, that would completely ruin software development.
We would soon have tons of automatically generated software that is difficult for humans to read. People developing new libraries would be under less pressure to make them legible, because as long as they can be understood by AIs, who cares. Paying a human to figure this out would be unprofitable, because running the AI a thousand times and hoping it gets it right once would be cheaper. Etc.
Current LLM coding agents are pretty bad at noticing that a new library exists to solve a problem in the first place, and at evaluating whether an unfamiliar library is fit for a given task.
As long as those things remain true, developers of new libraries wouldn’t be under much pressure in any direction, besides “pressure to make the LLM think their library is the newest canonical version of some familiar lib”.