Well, I very much doubt that the entire programming world will get access to a four-quintillion-parameter code-generating model within five years. However, I do foresee the descendants of OpenAI Codex getting much more powerful and much more widely used within that timeframe. After all, Transformers came out only five years ago, and they’ve definitely come a long way since.
Human culture changes more slowly than AI technology, though, so I expect businesses to adopt such models only with great trepidation at first. Programmers will almost certainly need to stick around to verify and validate generated code for quite some time. More output will be expected of programmers as the technology is adopted, for sure, but that probably won’t lead to the elimination of the jobs themselves, just as the cotton gin didn’t lead to the end of slavery and the rise of automation didn’t lead to the rise of leisure time.
Eventually, though, yes, code generation will be almost universally automated, at least once everyone is comfortable with automated code verification and validation. However, I wouldn’t expect that cultural shift to be complete until at least the early 2030s. That’s not to say we aren’t in fact running out of time, of course.
Code generation will be almost universally automated
I must note that code generation is already almost universally automated: practically nobody writes assembly; it is almost always generated by compilers. But no, compilers didn’t put an end to programming.
By “code generation being automated,” I mean that humans will program in natural language, without having to think about the particulars of data structures and algorithms (or syntax). A good enough LLM can handle all of that itself, though it might ask the human to verify that the resulting program functions as expected.
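To make that concrete, here’s a minimal sketch of the loop I have in mind. The `call_llm` function is a stub standing in for a real code-generation model, and all the names here are made up for illustration, not any actual product’s API:

```python
# Sketch of a natural-language programming loop. `call_llm` is a stand-in
# for a real code-generation model; here it just returns a canned answer.

def call_llm(prompt: str) -> str:
    # Hypothetical model call; a real system would hit an LLM API here.
    return (
        "def dedupe(items):\n"
        "    seen = set()\n"
        "    out = []\n"
        "    for x in items:\n"
        "        if x not in seen:\n"
        "            seen.add(x)\n"
        "            out.append(x)\n"
        "    return out\n"
    )

def program_from_description(description: str) -> str:
    source = call_llm(f"Write a Python function that {description}")
    namespace: dict = {}
    exec(source, namespace)  # the human never touches the implementation
    func = namespace["dedupe"]

    # The model hands verification back to the human: show the program's
    # behavior on a sample input and ask for confirmation.
    sample = [3, 1, 3, 2, 1]
    print(f"On {sample} the program returns {func(sample)}. Look right? [y/n]")
    return source

program_from_description("removes duplicates from a list while preserving order")
```

The point is that the human’s job shifts from writing the `dedupe` body to judging whether the demonstrated behavior matches what they meant.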
Maybe the models will be trained to look for edge cases where the program technically does what the human asked for but seems to violate the overall intent of the program. In other words, situations where the program follows the letter of the law (i.e., the specification) but not the spirit of the law.
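A toy example of that letter/spirit gap (my own invented spec, purely for illustration): a bill-splitting function whose specification says each share is the total divided by n, rounded to cents. The spec-level check passes while an intent-level check, that no money appears or vanishes, fails:

```python
# Toy letter-vs-spirit example: the spec says each share is
# round(total / n, 2); the intent is that the shares sum to the total.

def split_bill(total: float, n: int) -> list[float]:
    # Follows the letter of the spec exactly.
    return [round(total / n, 2) for _ in range(n)]

def meets_spec(total: float, n: int, shares: list[float]) -> bool:
    return all(s == round(total / n, 2) for s in shares)

def meets_intent(total: float, n: int, shares: list[float]) -> bool:
    # The unstated intent: no money appears or vanishes.
    return round(sum(shares), 2) == round(total, 2)

shares = split_bill(100.00, 3)
print(shares)                           # [33.33, 33.33, 33.33]
print(meets_spec(100.00, 3, shares))    # True  -- letter of the law
print(meets_intent(100.00, 3, shares))  # False -- a cent has vanished
```

A model hunting for edge cases would flag inputs like this one, where every clause of the spec is satisfied and the outcome is still wrong.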
Come to think of it, if you could get an LLM to look for such edge cases robustly, it might be able to help RL systems avoid Goodharting, steering the agent toward the intuitive intent behind a given utility function.
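Here’s a rough sketch of what I mean, with a stub standing in for the hypothetical LLM judge; the cleaning-robot scenario and every name in it are invented for illustration:

```python
# Sketch: shaping a proxy reward with an intent check. `intent_check` is a
# stand-in for the hypothetical LLM judge; a real one would read the state
# and the natural-language intent behind the utility function.

# Toy cleaning robot: the proxy reward counts "mess removed"; hiding mess
# under the rug maximizes the proxy while violating the intent.
def proxy_reward(state: dict) -> float:
    return float(state["mess_removed"])

def intent_check(state: dict) -> bool:
    # Stub for an LLM asked: "does this outcome match what the designer
    # actually wanted?" Here, hidden mess counts as a violation.
    return state["mess_hidden"] == 0

def shaped_reward(state: dict, penalty: float = 10.0) -> float:
    r = proxy_reward(state)
    return r if intent_check(state) else r - penalty

honest = {"mess_removed": 3, "mess_hidden": 0}
gamed = {"mess_removed": 5, "mess_hidden": 5}  # swept under the rug

print(proxy_reward(gamed) > proxy_reward(honest))    # True: proxy is gamed
print(shaped_reward(gamed) > shaped_reward(honest))  # False: gaming no longer pays
```

Whether the judge itself can be made robust is of course the whole question; if it can be gamed, you’ve just moved the Goodharting one level up.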
Half joking on both counts, though I could probably think through and make a less joking version that has a lot more caveats; obviously neither statement is exactly true as stated.
Yeah, that’s probably still another 7 years out by my estimate.

Yeah, I mean, I don’t think anyone would reasonably expect it to with the current ratio of who gets gains from trade.