Gemini 3.0 has a serious problem for coding. Instead of making the minimal changes I ask for in a codebase, it will completely rewrite the code, choosing entirely new algorithms, new variable names, and so on. Most recently, the rewrite was an outright regression—I’d already settled on a numerically stable approach to computing Pearson correlation coefficients, and Gemini reverted to a numerically unstable method.
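To illustrate the kind of regression I mean (this is a sketch of the general technique, not my actual code): the “textbook” single-pass Pearson formula subtracts two large, nearly equal sums and can lose most of its precision to catastrophic cancellation when the data sit far from zero, whereas a Welford-style update of co-moments around running means stays accurate.

```python
import math

def pearson_naive(xs, ys):
    # Textbook formula: r = (n·Σxy − Σx·Σy) / sqrt((n·Σx² − (Σx)²)(n·Σy² − (Σy)²)).
    # Numerically unstable: when values are large relative to their spread,
    # the subtractions cancel almost all significant digits.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    num = n * sxy - sx * sy
    den = math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
    return num / den

def pearson_stable(xs, ys):
    # Welford/Pébay-style single-pass update of means and co-moments.
    # Each term is a deviation from a running mean, so no large
    # near-equal quantities are ever subtracted.
    mean_x = mean_y = cxy = cxx = cyy = 0.0
    for i, (x, y) in enumerate(zip(xs, ys), start=1):
        dx = x - mean_x
        dy = y - mean_y
        mean_x += dx / i
        mean_y += dy / i
        cxy += dx * (y - mean_y)   # uses updated mean_y
        cxx += dx * (x - mean_x)   # uses updated mean_x
        cyy += dy * (y - mean_y)
    return cxy / math.sqrt(cxx * cyy)
```

With data offset far from the origin (say, values near 1e8), the stable version keeps its accuracy while the naive one starts shedding digits; that difference is exactly what the model’s rewrite threw away.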
I wonder if this is a case of GDM (Google DeepMind) optimising for the destination rather than the journey—or, more concretely, optimising for entirely AI-produced code over coding assistants.
They still have to choose whether to optimize for an AI coding agent that can upgrade established codebases, or whether the vision is to replace ~all existing codebases with AI-generated and AI-maintained code. The former is my use case, auditing and improving existing niche scientific software, but the latter makes more sense to me as a way to get the maximum leverage out of this technology in the long run.