The assumption that ChatGPT can’t do more than just write code is already wrong today. It’s decent at laying out the various packages that might solve your problem and giving you the pros and cons of each.
Given the way ChatGPT works in particular, it’s bad at reading through existing code and finding a bug. But work is underway to move from a one-prompt, one-answer model to giving an agent instructions and letting it take multiple actions in succession; such an agent can read through the code to search for the bug.
OpenAI already built WebGPT, an agent that could browse the web to find sources and ground its answers instead of hallucinating them. At the moment it’s quite unclear what a model that can freely browse the documentation and existing code, and then write new code, will be able to do.
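The shift from one prompt, one answer to an agent taking multiple actions can be sketched as a simple loop: the model decides on an action, the result is fed back as the next observation, and this repeats until it can answer. This is a minimal illustrative sketch; `fake_model`, the action names, and the toy codebase are all hypothetical stand-ins, not any real OpenAI interface.

```python
def fake_model(observation: str) -> dict:
    """Stub standing in for an LLM: picks the next action from what it has seen."""
    if "off-by-one" in observation:
        return {"action": "answer",
                "arg": "Bug: xs[:-1] drops the last element"}
    return {"action": "read_file", "arg": "utils.py"}

# Toy codebase for the agent to search through.
CODEBASE = {
    "utils.py": "def total(xs):\n"
                "    return sum(xs[:-1])  # off-by-one: drops the last element",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Agent loop: act, observe the result, repeat until the model answers."""
    observation = task
    for _ in range(max_steps):
        decision = fake_model(observation)
        if decision["action"] == "answer":
            return decision["arg"]
        # Execute the tool call; its output becomes the next observation.
        observation = CODEBASE.get(decision["arg"], "<file not found>")
    return "gave up"

print(run_agent("Find the bug in the codebase."))
```

A single prompt–answer call would be one invocation of `fake_model` with no feedback; the loop is what lets the agent actually read the code before committing to an answer.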