Typically I operationalize “employable as a software engineer” as being capable of completing tasks like:
“Fix this error we’re getting on BetterStack.”
“Move our Redis cache from DigitalOcean to AWS.”
“Add and implement a cancellation feature for ZeroPath scans.”
“Add the results of this evaluation to our internal benchmark.”
These are pretty representative examples of the kinds of tasks a median software engineer will be assigned and resolving on a day-to-day basis.
No chatbot or chatbot wrapper can complete tasks like these for an engineering team at present, including Devin and its peers. Partly this is because most software engineering work is very high-context, in the sense that implementing the proper solution depends on understanding a large body of existing infrastructure, business knowledge, and code.
When people talk about models today doing “agentic development”, they’re usually describing their ability to complete small projects in low-context situations, where all you need to understand is the prompt itself and software engineering as a discipline. That makes sense, because if you ask AIs to write (for example) a Pong game in JavaScript, the AI can complete each of the pieces in one pass, and fit everything it’s doing into one context window. But that kind of task is unlike the vast majority of things employed software engineers do today, which is why we’re not experiencing an intelligence explosion right this second.
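To make the contrast concrete: the core of a Pong game is the kind of logic that fits entirely in the prompt. A sketch like the following (all names, field dimensions, and speeds are illustrative, not from any real codebase) requires no knowledge beyond the task description itself:

```javascript
// Self-contained Pong ball physics: a low-context task a model can
// complete in one pass. Field size and velocities are arbitrary.
const FIELD = { width: 800, height: 600 };

// Advance the ball one tick, reflecting off the top and bottom walls.
function stepBall(ball) {
  let { x, y, vx, vy } = ball;
  x += vx;
  y += vy;
  if (y <= 0 || y >= FIELD.height) vy = -vy; // vertical bounce
  return { x, y, vx, vy };
}

// Reflect the ball horizontally when it reaches the left paddle.
function bounceOffPaddle(ball, paddleY, paddleHeight) {
  const hit = ball.x <= 0 && ball.y >= paddleY && ball.y <= paddleY + paddleHeight;
  return hit ? { ...ball, vx: -ball.vx } : ball;
}
```

Every identifier, constant, and rule here is visible in one screenful; contrast that with the BetterStack or Redis-migration tasks above, where the relevant context is spread across dashboards, infrastructure, and months of prior decisions.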