Basically every time a new model is released by a major lab, I hear from at least one person (not always the same person) that it’s a big step forward in programming capability/usefulness. And then David gives it a try, and it works qualitatively the same as everything else: great as a substitute for stack overflow, can do some transpilation if you don’t mind generating kinda crap code and needing to do a bunch of bug fixes, and somewhere between useless and actively harmful on anything even remotely complicated.
It would be nice if there were someone who tries out every new model’s coding capabilities shortly after they come out, reviews it, and gives reviews with a decent chance of actually matching David’s or my experience using the thing (90% of which will be “not much change”) rather than getting all excited every single damn time. But also, to be a useful signal, they still need to actually get excited when there’s an actually significant change. Anybody know of such a source?
EDIT-TO-ADD: David has a comment below with a couple examples of coding tasks.
My guess is neither of you is very good at using them, and getting value out of them somewhat scales with skill.
Models can easily replace on the order of 50% of my coding work these days, and if I have any major task, my guess is I quite reliably get 20%-30% productivity improvements out of them. It does take time to figure out which things they are good at, and how to prompt them.
I think you’re right, but I rarely hear this take. Probably because “good at both coding and LLMs” is a light tail end of the distribution, and most of the relative value of LLMs in code is located at the other, much heavier end of “not good at coding” or even “good at neither coding nor LLMs”.
(Speaking as someone who didn’t even code until LLMs made it trivially easy, I probably got more relative value than even you.)
Sounds plausible. Is that 50% of coding work that the LLMs replace of a particular sort, and the other 50% a distinctly different sort?
Note this 50% likely only holds if you are using a mainstream language. For some non-mainstream languages I have gotten responses that were really unbelievably bad. Things like “the name of this variable is wrong”, which literally could never be the problem (it was a valid identifier).
And similarly, if you are trying to encode novel concepts, it’s very different from gluing together libraries, or implementing standard well known tasks, which I would guess is what habryka is mostly doing (not that this is a bad thing to do).
I do use LLMs for coding assistance every time I code now, and I have in fact noticed improvements in the coding abilities of the new models, but I basically endorse this. I mostly make small asks of the sort that sifting through docs or stack-overflow would normally answer. When I feel tempted to make big asks of the models, I end up spending more time trying to get the LLMs to get the bugs out than I’d have spent writing it all myself. Having the LLM produce code which is “close but not quite, possibly buggy, and possibly subtly so” that I then have to understand and debug could maybe save time, but I haven’t tried it, because it is more annoying than just doing it myself.
If someone has experience using LLMs to substantially accelerate things of a similar difficulty/flavor to (a) transpilation of a high-level torch module into a functional JITable form in JAX which produces numerically close outputs, or (b) implementation of a JAX/numpy based renderer of a traversable grid of lines, borrowing only the window logic from, for example, pyglet (no GLSL calls, rasterize from scratch), with consistent screen-space pixel width and fade-on-distance logic, I’d be interested in seeing how you do your thing. I’ve done both of these, with and without LLM help, and I think leaning hard on the LLMs took me more time rather than less.
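For concreteness, here is a toy sketch of the flavor of task (a). This is a made-up two-layer module, not the actual code in question, and the real thing has far more moving parts, but the “numerically close outputs” check at the end is the crux:

```python
# Toy sketch (hypothetical module, not the real task): a small torch MLP and a
# functionally equivalent JAX version that can be jit-compiled.
import torch
import jax
import jax.numpy as jnp


class TorchMLP(torch.nn.Module):
    def __init__(self, d_in=4, d_hidden=8, d_out=2):
        super().__init__()
        self.fc1 = torch.nn.Linear(d_in, d_hidden)
        self.fc2 = torch.nn.Linear(d_hidden, d_out)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))


def torch_params_to_jax(module):
    # Copy torch parameters into a plain dict of jnp arrays (a JAX pytree).
    return {name: jnp.asarray(p.detach().numpy())
            for name, p in module.named_parameters()}


@jax.jit
def jax_mlp(params, x):
    # Functional re-implementation of TorchMLP.forward.
    # torch.nn.Linear stores weights as (out_features, in_features), so transpose.
    h = jnp.maximum(x @ params["fc1.weight"].T + params["fc1.bias"], 0.0)
    return h @ params["fc2.weight"].T + params["fc2.bias"]


if __name__ == "__main__":
    m = TorchMLP()
    x = torch.randn(3, 4)
    params = torch_params_to_jax(m)
    out_torch = m(x).detach().numpy()
    out_jax = jax_mlp(params, jnp.asarray(x.numpy()))
    # The "numerically close outputs" requirement from the comment above.
    print(abs(out_torch - out_jax).max())
```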
File I/O and other such ‘mundane’ boilerplate-y tasks work great right off the bat, but getting the details right on less common tasks still seems pretty hard to elicit from LLMs. (And breaking it down into pieces small enough for them to get it right is very time consuming and unpleasant.)
I find them quite useful despite their bugs. I spend about 40% of my time debugging model code, 50% writing my own code, and 10% prompting. Having a planning discussion first with s3.6, and asking it to write code only after 5 or more exchanges, works a lot better.
Also helpful is asking for lots of unit tests along the way to confirm things are working as you expect.
Two guesses on what’s going on with your experiences:
You’re asking for code which involves uncommon mathematics/statistics. In this case, progress on scicodebench is probably relevant, and it indeed shows remarkably slow improvement. (There are many reasons for this; one relatively easy thing to try is to break down the task, forcing the model to write down the appropriate formal reasoning before coding anything, as in the sketch after this list. LMs are stubborn about not doing CoT for coding, even when it’s obviously appropriate, IME.)
You are underspecifying your tasks (and maybe your questions are more niche than average), or otherwise prompting poorly, in a way which a human could handle but models are worse at. In this case sitting down with someone doing similar tasks but getting more use out of LMs would likely help.
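Here is the kind of break-down I mean for the first guess. A toy sketch only; `call_llm` is a hypothetical stand-in for whatever chat-completion client you use, not a real API:

```python
def solve_numeric_task(call_llm, task_description):
    # Stage 1: force the model to write out the formal reasoning before any code.
    derivation = call_llm(
        "Before writing any code, derive the formulas this task needs step by "
        "step, stating the shape/units of every quantity:\n\n" + task_description
    )
    # Stage 2: only then ask for an implementation, conditioned on that derivation.
    code = call_llm(
        "Implement the task below, following this derivation exactly and "
        "without changing the math.\n\nDerivation:\n" + derivation
        + "\n\nTask:\n" + task_description
    )
    return derivation, code
```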
I would contribute to a bounty for y’all to do this. I would like to know whether the slow progress is prompting-induced or not.
We did end up doing a version of this test. A problem came up in the course of our work which we wanted an LLM to solve (specifically, refactoring some numerical code to be more memory efficient). We brought in Ray, and Ray eventually concluded that the LLM was indeed bad at this, and it indeed seemed like our day-to-day problems were apparently of a harder-for-LLMs sort than he typically ran into in his day-to-day.
A thing unclear from the interaction: it had seemed towards the end that “build a profile to figure out where the bottleneck is” was one of the steps towards figuring out the problem, and that the LLM was (or might have been) better at that part. And maybe models couldn’t solve your entire problem wholesale, but there was still potential skill in identifying factorable pieces that were better fits for models.
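For reference, the “profile first” step itself is simple enough to do by hand or to hand to a model; a generic sketch using Python’s stdlib tracemalloc (not the actual numerical code from the test):

```python
import tracemalloc

def profile_memory(fn, *args, top=5, **kwargs):
    # Run `fn` once and print the source lines that allocated the most memory,
    # to find the factorable piece worth handing to (or taking back from) a model.
    tracemalloc.start()
    result = fn(*args, **kwargs)
    snapshot = tracemalloc.take_snapshot()
    tracemalloc.stop()
    for stat in snapshot.statistics("lineno")[:top]:
        print(stat)  # e.g. "mycode.py:42: size=512 MiB, count=3, average=..."
    return result
```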
Interesting! Two yet more interesting versions of the test:
Someone who currently gets use from LLMs writing more memory-efficient code, though maybe this is kind of question-begging
Someone who currently gets use from LLMs, and also is pretty familiar with trying to improve the memory efficiency of their code (which maybe is Ray, idk)
Maybe you include this in “stack overflow substitute”, but the main thing I use LLMs for is to understand well-known technical things. The workflow is:
1) I am interested in understanding something, e.g. how a multiplexed barrel bit shifter works.
2) I ask the LLM to explain the concept.
3) Based on the initial response, I create separate conversation branches with the questions I have (to save money and keep the context closer; I didn’t evaluate whether this actually makes the LLM better). A toy sketch of this branching follows below.
4) Once I think I have understood the concept, or part of it, I explain it back to GPT. (Really I do this all the time during the entire process.)
5) The LLM (hopefully) corrects me if I am wrong (it seems to detect my mistakes more often than not).
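Concretely, the branching in step 3 amounts to something like this (a toy sketch with a generic message format, not any particular API):

```python
# Keep the initial explanation as a shared prefix and fork a copy of the
# message list per follow-up question, so each branch stays short
# (cheaper, and the context stays close to the question).
base_history = [
    {"role": "user", "content": "Explain how a multiplexed barrel bit shifter works."},
    {"role": "assistant", "content": "<the LLM's explanation>"},
]

def branch(history, question):
    # New branch = shared prefix + one new question; the original is untouched.
    return history + [{"role": "user", "content": question}]

branch_a = branch(base_history, "Why shift amounts that are powers of two?")
branch_b = branch(base_history, "How many gates does the 64-bit version need?")
```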
The last part of the conversation can then look like this:
User: Ah, now I think I understand. You have mechanisms that each shift by a fixed amount (powers of 2). Each of these multiplexers can be set up such that the data flows through them, or through an ID wire, into the next multiplexer. Now by controlling whether the value flows through the multiplexer or the ID wire, we can “sum up the shifts”. Specifically, if we have shifters of 1, 2, 4, …, 32, then we can achieve any shift (for 64-bit numbers) by activating the appropriate multiplexers. This design has not 64·64 gates, but instead 64·6 gates: 64 gates for each multiplexer.
AI: Exactly, you’ve got the concept! The design you described efficiently uses a series of multiplexers to perform shifts by fixed amounts that are powers of two (1, 2, 4, …, 32 for a 64-bit number). By selecting which multiplexers to activate, you can achieve any desired shift amount.
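In code, the structure described in that exchange, stages that either shift by a fixed power of two or pass the value through, looks roughly like this toy software model (my own sketch, not something from the conversation):

```python
def barrel_shift_left(value, shift, width=64):
    # Toy software model of a barrel shifter: one stage per bit of `shift`.
    # Each stage either shifts by a fixed power of two (1, 2, 4, ..., 32)
    # or passes the value through unchanged, which is what the multiplexers
    # in the exchange above select between.
    mask = (1 << width) - 1
    for stage in range(width.bit_length() - 1):  # 6 stages for width=64
        if (shift >> stage) & 1:
            value = (value << (1 << stage)) & mask
    return value

assert barrel_shift_left(0b1, 37) == 1 << 37  # 37 = 32 + 4 + 1
```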
I had probably ~200,000 words worth of conversation with LLMs, mainly in this format.
I am not sure which next leap you are talking about. But I intuit, based on some observations, that GPT-4o is much better for this than GPT-3 (you might be talking about more recent “leaps”). (I didn’t test o1 extensively because it’s so expensive.)
Have you tried to make a mistake in your understanding on purpose to test out whether it would correct you or agree with you even when you’d get it wrong?
(and if yes, was it “a few times” or “statistically significant” kinda test, please?)
Why don’t you run the test yourself? It seems very easy.
Yes, it quite often catches me when I am saying wrong things. It also quite often says things that are not correct, and I correct it; if I am right, it usually agrees immediately.
Interesting: the first part of the response seems to suggest that it looked like I was trying to understand more about LLMs… Sorry for the confusion, I wanted to clarify an aspect of your workflow that was puzzling to me. I think I got all the info I was asking about, thanks!
FWIW, if the question was an expression of actual interest and not a snarky suggestion: my experience with chatbots has been positive for brainstorming, dictionary “search”, rubber ducking, and descriptions of common-sense (or even niche) topics, but disappointing for anything that requires applying common sense. For programming, one- or few-line autocomplete is fine for me, since then it’s me doing the judgement: half of the suggestions are completely useless, half are fine, and the third half look fine at first before I realise I needed the second most obvious thing this time. But it can save time for the repetitive part of almost-repeating stuff. For multi-file editing, I find it worse than useless; it feels like doing code review after a psychopath pretending to do programming (AFAICT all models can explain most stuff correctly and then write the wrong code anyway, and I don’t find it useful when it apologizes later if I point that out, or pre-doubts itself in CoT for 7 paragraphs and then does it wrong anyway). I like to imagine it was trained on all code from GH PRs, both before and after the bug fix… or that it was bored, so it’s trying to insert drama into a novel about my stupid programming task, where the second chapter will be about heroic AGI firefighting the shit written by previous dumb LLMs...
I don’t use it to write code, or really anything. Rather I find it useful to converse with it. My experience is also that half of what it says is wrong and that it makes many dumb mistakes. But the conversation is still extremely valuable, because GPT often makes me aware of existing ideas that I didn’t know. Also, like you say, it can get many things right and then later get them wrong. The getting-it-right part is what’s useful to me. The part where I tell it to write all my code is just not a thing I do. Usually I just have it write snippets, and it seems pretty good at that.
Overall I am like “Look, there are so many useful things that GPT tells me and helps me think about simply by having a conversation.” Then somebody else says “But look, it gets so many things wrong. Even quite basic things.” And I am like “Yes, but the useful things are useful enough that overall it’s totally worth it.”
Maybe for your use case try codex.
One thing I’ve noticed is that current models like Claude 3.5 Sonnet can now generate non-trivial 100-line programs like small games that work in one shot and don’t have any syntax or logical errors. I don’t think that was possible with earlier models like GPT-3.5.
My impression is that they are getting consistently better at coding tasks of a kind that would show up in the curriculum of an undergrad CS class, but much more slowly improving at nonstandard or technical tasks.
I’d be down to do this. Specifically, I want to do this, but I want to see if the models are qualitatively better at alignment research tasks.
In general, what I’m seeing is that there is not a big jump with o1 Pro. However, it is possibly getting closer to being able to one-shot a website based on a screenshot and some details about how the user likes their backend setup.
In the case of math, it might be a bigger jump (especially if you pair it well with Sonnet).
Regarding coding in general, I basically only prompt programme these days. I only bother editing the actual code when I notice a persistent bug that the models are unable to fix after multiple iterations.
I don’t know jackshit about web development and have been making progress on a dashboard for alignment research with very little effort. It’s very easy to build new projects quickly. The difficulty comes when there is a lot of complexity in the code. It’s still valuable to understand how the high-level things work, and which low-level things the model will fail to proactively implement.
While Carl Brown has said (a few times) that he doesn’t want to do more YouTube videos for every new disappointing AI release, so far he seems to be keeping tabs on them in the newsletter just fine: https://internetofbugs.beehiiv.com/
...I am quite confident that if anything actually started to work, he would comment on it. So even if he won’t say much about future incremental improvements, it might be a good resource to subscribe to for a better signal: if Carl gets enthusiastic about AI coding assistants, it will be worth paying attention.