Programming has already been automated several times. First off, as indicated above, it was automated by moving from actual electronics into machine code. And then machine code was automated by compilers, and then most of the manual busywork of compiled languages was automated by the higher-level languages with GC, OO, and various other acronyms.
In other words, I fully expect that LLM-driven tools for code generation will become a standard and necessary part of the software developer’s toolkit. But I highly doubt that software development itself will be obsoleted; rather, it will move up to the next level of abstraction and continue from there.
I’m not sure about software engineering as a whole, but I can see AI making programming obsolete.
it will move up to the next level of abstraction and continue from there
My worry is that the next level of abstraction above Python is plain English, and that anyone will be able to write programs just by asking “Write an app that does X”—except they’ll ask the AI instead of a freelance developer.
The historical trend has been that programming becomes easier. But maybe programming will become so easy that everyone can do it, and programmers won’t be needed anymore.
A historical analogy is search, which used to be a skilled job done by librarians and involved crafting logical queries with keywords (e.g. ‘house’ AND ‘car’). Now natural-language search makes it possible for anyone to use Google, and we don’t need librarians for search anymore.
The same could happen to programming. Like librarians for search, programmers look like a middleman between the user requesting a feature and the finished software. Historically, programming computers has been too difficult for average people, but that might not be true for long.
Unless we are assuming truly awesome models able to flawlessly write full-fledged apps of arbitrary complexity without any human editing, I think you are underestimating how bad the average person is at programming. “Being able to correctly describe an algorithm in plain English” is not a common skill. Even being able to correctly describe a problem is not so common, because the average person doesn’t even know what a programming variable is.
I’ve been in Computer Science classrooms, and even the typical CS student often makes huge mistakes while writing pseudocode on paper (which is basically a program in plain English). This has nothing to do with knowing Python syntax; those people are bad at abstract reasoning, and I am quite skeptical that an LLM could do all the abstract reasoning for them.
Strong upvote. I can sort of expect a future where the developer does not need to know C or Python or whatever programming language anymore, and can happily develop very human-readable pseudo-code at a super high level of abstraction. But a future where the developer does not know any algorithm even in theory and just throws LLMs at everything seems just plain stupid. You don’t set up gigantic models if your problem admits a simple linear algorithm.
That doesn’t seem to match history. People gladly pay for an expensive hash-table name lookup (as in a Python object) even when a simple addition (as in a C struct) would suffice. Of course people will set up gigantic models even if the problem admits a simple linear algorithm.
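As a concrete sketch of the trade-off being described here: in CPython, ordinary attribute access goes through a per-instance dict (a hash lookup keyed on the attribute name), whereas a C struct field access compiles down to a base pointer plus a constant offset. The class names below are illustrative, not from the original discussion; `__slots__` is the standard CPython mechanism that brings attribute storage closer to the struct model.

```python
class Point:
    """Ordinary class: attributes live in a per-instance dict,
    so every p.x is morally a hash-table lookup of the string "x"."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
# Attribute access resolves through the instance dict:
assert p.x == p.__dict__["x"]

class SlottedPoint:
    """With __slots__, CPython stores attributes at fixed offsets
    in the instance, much closer to a C struct's field access."""
    __slots__ = ("x", "y")
    def __init__(self, x, y):
        self.x = x
        self.y = y

q = SlottedPoint(1, 2)
# No per-instance dict exists at all for a slotted class:
assert not hasattr(q, "__dict__")
```

Most Python code happily pays the dict-lookup cost anyway, which is the parent comment’s point: convenience tends to win over mechanical efficiency.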