Basically every time a major lab releases a new model, I hear from at least one person (not always the same person) that it’s a big step forward in programming capability/usefulness. And then David gives it a try, and it works qualitatively the same as everything else: great as a substitute for Stack Overflow, able to do some transpilation if you don’t mind kinda crap code and a bunch of bug fixes afterward, and somewhere between useless and actively harmful on anything even remotely complicated.
It would be nice if there were someone who tries out every new model’s coding capabilities shortly after it comes out and gives reviews with a decent chance of actually matching David’s or my experience using the thing (90% of which will be “not much change”), rather than getting all excited every single damn time. But to be a useful signal, they also need to actually get excited when there’s a genuinely significant change. Anybody know of such a source?
EDIT-TO-ADD: David has a comment below with a couple examples of coding tasks.
I don’t know of one, though I haven’t followed the literature much over the past couple of years.