Having used Cursor and VSCode with GitHub Copilot, I feel like a huge part of the problem here isn’t even the LLMs per se: it’s the UX.
The default here is “you get a suggestion whether you asked for it or not, and if you press Tab it gets added”. Who even thought that was a good idea? Sometimes I press Tab because I need to indent four spaces, not because I want to insert whatever random code the LLM thinks is appropriate. It also seems incredibly wasteful: continuously sending queries to the API, often for stuff I simply don’t need, with who knows how much unnecessary context attached.

A huge part of the benefit I get from LLM assistants is simple cases of “here is a function with a very obvious name that does exactly that obvious thing” (which is really nothing more than “grab an existing code snippet and adapt the names to my conventions”), or “follow this repetitive pattern to do the same thing five times” (again a very basic, very context-dependent automatic task). Other high-value stuff includes writing docstrings and routine unit tests for simple code. Meanwhile, when I need to code something that takes a decent amount of thought, I’m grateful for the best thing these UIs luckily do include: the “snooze” function to just shut the damn thing up for a while.
As I see it, the correct UX for an LLM code assistant would be:
only operate on demand
offer a few basic tasks (like “write docstring”) that can be invoked on the line where your cursor is, each grabbing a smart context and using its own specific prompt
let you define your own custom tasks flexibly, or download them as plugins
All of it invoked through something like the command palette (Ctrl+Shift+P); a rough sketch of what that might look like is below. There’s probably even smarter and more efficient stuff that can be done via clever use of RAG, embedding models, etc. The current approach is among the laziest possible and definitely suffers from these problems, yeah.
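To make the “on demand, via the command palette” idea concrete, here’s a minimal sketch of one such task written as a VS Code extension command. The command id `llmTasks.writeDocstring` and the `callLLM` helper are hypothetical placeholders, not any real extension or API; the “smart context” here is just a naive window of lines around the cursor, which is exactly the part you’d replace with something RAG/embedding-based.

```typescript
// Hypothetical sketch: an on-demand "write docstring" command, triggered only
// from the command palette (Ctrl+Shift+P), never firing on its own.
import * as vscode from 'vscode';

// Placeholder for whatever provider call you'd actually wire in.
async function callLLM(prompt: string): Promise<string> {
  return '/** TODO: docstring returned by the model */';
}

export function activate(context: vscode.ExtensionContext) {
  const cmd = vscode.commands.registerCommand('llmTasks.writeDocstring', async () => {
    const editor = vscode.window.activeTextEditor;
    if (!editor) return;

    // Naive "smart context": +/- 20 lines around the cursor. This is the spot
    // where embeddings or repo-level retrieval could pick better context.
    const line = editor.selection.active.line;
    const start = Math.max(0, line - 20);
    const end = Math.min(editor.document.lineCount - 1, line + 20);
    const contextText = editor.document.getText(
      new vscode.Range(start, 0, end, editor.document.lineAt(end).text.length)
    );

    // Task-specific prompt, only sent because the user explicitly asked.
    const docstring = await callLLM(
      `Write a docstring for the function at the cursor:\n${contextText}`
    );

    // Insert the result above the cursor line.
    await editor.edit(edit => {
      edit.insert(new vscode.Position(line, 0), docstring + '\n');
    });
  });
  context.subscriptions.push(cmd);
}
```

Custom tasks would then just be more entries of this shape (a name, a context strategy, a prompt), which is what makes them easy to ship as plugins or user-defined config.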