Gamblification
When using LLM-based coding assistants, I always had a strange feeling about the interaction. I think I now have a pointer around that feeling -[1] disappointment from having expected more (again and again), followed by a low level of disgust, and an aftertaste of disrespect growing into hatred.[2]
But why hatred - I didn't expect that color of emotion to be in there - where does it come from? A hypothesis comes to mind 🤔 - I hate gambling,[3] LLMs are stochastic, and whenever I got a chance to observe people for whom LLMs worked well, it seemed like addiction.
From another angle, when doing code review, it motivates me when it's a learning opportunity for both parties. Lately, it feels like the other "person" is an incompetent lying psychopath who can copy&paste around and so far has been lucky to only work on toy problems, so no one noticed they don't understand programming .. at all. Not a single bit. They are even actively anti-curious. Just roleplaying - fake it till you make it, but they didn't make it yet.[4]
I recently lost my love of programming and I haven't found my way back yet. I blame LLMs for that. And post-capitalist incentives towards SF-style disruptive network-effect monopolization, enshittification of platforms, gamification of formerly-nuanced human interactions. And companies and doomers that over-hype current "AI" - I believe most of the future risks and some of the future potential, and that the current state of the art is amazing, a technological wonder, magic come real … so how the heck did it happen that the hype has risen so much higher than reality for so long when reality is so good?!?
I want progressive type systems and documentation on hover, I don't want to swap the if condition while refactoring and then "fix" the test. I don't want to skip 1 line when migrating from JSON to some random format. I don't want *+ instead of + in my regexp. A random file deleted. Messing with .gitignore. A million other inconveniences. Have you seen how people un-minified Claude Code - the sheer amount of workarounds, cringe IMPORTANT inside the system prompt and the constant reminders? Have you seen people seeing it and concluding that it's proof that "it works"?!?
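That regexp complaint points at a whole class of one-character bugs: a quantifier slip doesn't raise an error, it silently accepts input that should have been rejected. A made-up illustration (the pattern and log lines below are hypothetical, not from any real incident), using Python's `re`:

```python
import re

line_ok = "error_count: 42"
line_bad = "error_count: "  # malformed: the number is missing

# Intended pattern: '+' requires at least one digit,
# so the malformed line fails loudly (no match at all).
assert re.search(r"count: (\d+)", line_ok).group(1) == "42"
assert re.search(r"count: (\d+)", line_bad) is None

# One character off: '*' also matches zero digits,
# so the malformed line "succeeds" with an empty capture
# and the bug travels downstream unnoticed.
assert re.search(r"count: (\d*)", line_bad).group(1) == ""
```

This is exactly the kind of diff a tired reviewer waves through, which is why a one-character change from an assistant can be worse than no change at all.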
I believe that when companies ask software developers to improve their "productivity" by using LLM coding assistants, it is the next stage of enshittification - hook users first, then squeeze users to satisfy customers, then milk customers to make (a promise of) profits, now make the employees addicted to the gambling tools in order to collect data for automating the employees later. The programmers and the managers alike.
Gamblification. A machine for losing the coding game after which only AGI will be able to save us from all the code slop. Opinions?
- ^
If you prefer em dashes, please use your imagination - I don't like them.
- ^
Similar to an interview when someone asks about a random tangent that is obviously related to the previous topic: the candidate explains it totally fine, but in a way disconnected from the previous topic.. so you then ask for an example and it goes completely off the rails (as if they had no experience with practicalities around that topic, just reciting buzzword definitions).
Or when talking to a high-ranking stakeholder and all goes well until they agree with you about a wrong detail, so you ask them for an opinion about a core point and you realize they don't seem to have a clue - they were just vibe problem solving the whole time. Especially when they don't even see in your eyes that they just made a mistake (I mean, good managers don't have to be domain experts, but they always have a desire to learn more and the social skills to recognize when you think that you have the info they need.. bad managers just make a decision when asked to make a decision).
- ^
Rolling a physical die is fine for me, and I enjoy The Settlers of Catan or Dostihy a sázky (a Czech version of Monopoly that is far superior). I'm not exactly in love with slot machines[5], loot boxes, mobile games, or D&D clones. But (post-)modern sports betting and high-frequency trading are outright evil.
- ^
I finally understand the stochastic parrot metaphor - there is nothing wrong with Markov processes, parrots are highly intelligent animals, pattern matching is not "just pattern matching" but a real quality, and a calculator is superhuman at long division … but the [something something ¯\_(ツ)_/¯ understanding/thinking/reasoning] is supposed to be more than?!?
- ^
BTW they are called "winning machines" in my local language(s), and I bet that they would be less of a social problem if they were called "losing machines" (to match the median outcome).
spotted in an unrelated Discord, looks like I'm not the only person who noticed the similarity 🙂
Yeah, I kinda get it. Not to the point of hatred, but I do find interacting with LLMs… mentally taxing. They pass as just enough of a "well-meaning eagerly helpful person" to make me not want to be mean to them (as it'd make me feel bad), but they also continually induce weary disappointment in me.
I wish we figured out some other interface over the base models that is not these "AI assistant" personas. I don't know what that'd be, but surely something better is possible. Something framed as an impersonal call to a database equipped with a powerful program/knowledge synthesis tool, maybe.
This prompted me to write up my recent experience with it; see here.
I suspect that the near future of programming is that you will be expected to produce a lot of code fast because that's the entire point of having an AI, you just tell it what to do and it generates the code, it would be stupid to pay a human to do it slowly instead… but you will also be blamed for everything the AI does wrong, and expected to fix it. And you will be expected to fix it fast, by telling the AI to fix it, or maybe by throwing all the existing code away and generating it again from scratch, because it would be stupid to pay a human to do it slowly instead… and you will also be blamed if the AI fails to fix it, or if the new version has different new flaws.
It will be like being employed to gamble, and if you don't make your quota of jackpots per month, you get fired and replaced by a better gambler.
I also wish there was no industry that would serve as an example for that employment model...
nah 🙂, the stupid companies will self-select out of the job market for not-burned-out good programmers, and the good companies will do something like "product engineering" where product managers and designers make their own PoCs to validate with stakeholders before/without endless specifications handed over to engineers in the first iteration; then the programming roles will focus on building production-quality solutions, and maybe a QA renaissance will happen to write useful regression tests when domain experts can use coding assistants to automate boring stuff and focus on domain expertise/making decisions, instead of programmers trying to guess the intent behind a written specification twice (for the code and for the test.. or once when it's the same person/LLM writing both, which is a recipe for useless tests IMHO)
(..not making a prediction here, more like a wish TBH)
On the one hand, I understand this approach (firing programmers that don't use AI), but on the other hand, this is like firing someone for using Emacs rather than PyCharm (or VSCode or whatever is the new hotness). It's sad that it looks like people are going back to being measured by lines of code.
I'd like to be able to just provide high-level overviews of what I want done, then have an army of agents do it, but it doesn't seem to be there yet (could be a skill issue on my part, though).
so, the end of "agile"?
No, no - you misunderstand agile! See, in our company, we have AIgile! It's a slightly modified version of agile which we find to be even better! The scrum master is GPT-4o, which is known to be wonderful at encouraging and building team spirit.
I have yet to work in a company that actually implemented agile, rather than saying they use agile but then inventing their own complicated system involving lots of documentation and meetings.
I actually have Scrum Master training… so I can confidently say that most companies that claim to do Scrum are actually closer to doing its opposite.
retrospective - either not doing it at all because it is a waste of time (yeah, remove the only part of the process where the developers can provide feedback about the process, and then be surprised that the process sucks), or doing some perverted reverse-retrospective where instead of the developers giving feedback to the company, it is the company giving feedback to the developers, usually that they should work faster and make fewer mistakes
there is no "Jira" in the Scrum Guide
there is no "Confluence" in the Scrum Guide
and the retrospective is exactly the place where the developers should say "Jira and Confluence suck, we want some tools that actually work instead", but I already mentioned the retrospective
there are no managers in Scrum; and no, it is not about renaming the manager's role, but about giving the team autonomy
the only deadlines in Scrum are the ones negotiated at sprint planning, and not in the sense of "management says that the deadline is after two sprints, but you have the freedom to choose which part you implement during the first sprint, and which part during the second one"
I only experienced something like actual Scrum once, when a small department in a larger company decided to actually try Scrum by the textbook. It was fun while it lasted. Ironically, we had to stop because the entire company decided to switch to "Scrum", and we were told to do it "properly". (That meant no more wasting time on retrospectives; sprints need to be two weeks long because more sprints = more productivity; etc.)
I find it very amusing that everyone proudly boasts of being "Agile" when they have all the nimble swiftness in decision and action of a 1975 Soviet Russia agricultural committee.
There are a lot of places where I find a lot of use from LLMs, but it's usually grunt work that would take me a couple of hours but isn't hard, like tests, or simple shuffling things around. It's also quite good at "here's a bunch of linting errors - fix them", though of course you have to be constantly vigilant.
With real code, I'll sometimes use it as a rubber duck or ask it to come up with some preliminary version of what I want, then refactor it mercilessly. With preexisting code I usually need to tell it what to do, or at least point at which files it should look at (this helps quite a lot). With anything complicated, it's usually faster for me to just do it myself. Could just be a skill issue on my part, of course.
They are getting better. They're still not good, but they're often better than a random junior. My main issue is that they don't grow. A rubbish intern will get a lot better, as long as they are capable of learning. I'm fine starting with someone on a very low level, as long as I won't have to continuously point out the same mistakes. With an LLM this is frustrating because I have to keep pointing out things I said a couple of turns previously, or which are part of those cringe IMPORTANT prompt notes.
I still prefer to just ask an LLM to write me my CSS :P
One-off PoC website/app type things are something where LLMs shine. Or dashboards like "here's a massive CSV file with who knows what - make me a nice dashboard thingy I can use to play with the data" or "here's an API - make me a quick n dirty website that allows me to play about with it".
Having used Cursor and VSCode with GitHub Copilot, I feel like a huge part of the problem here isn't even the LLMs per se: it's the UX.
The default here is "you get a suggestion whether you asked for it or not, and if you press Tab it gets added". Who even thought that was a good idea? Sometimes I press Tab because I need to indent four spaces, not because I want to write whatever random code the LLM thinks is appropriate. And this seems incredibly wasteful: continuously sending queries to the API, often for stuff I simply don't need, with who knows how big a context length that isn't necessary! A huge part of the benefit I get from LLM assistants are simple cases of "here is a function called a very obvious thing that does exactly that obvious thing" (which is really nothing more than "grab an existing code snippet and adapt the names to my conventions"), or "follow this repetitive pattern to do the same thing five times" (again a very basic, very context-dependent automatic task). Other high-value stuff includes writing docstrings and routine unit tests for simple stuff. Meanwhile, when I need to code something that takes a decent amount of thought, I am grateful for the best thing that these UIs luckily do include: the "snooze" function to just shut the damn thing up for a while.
As I see it, the correct UX for an LLM code assistant would be:
only operate on demand
have a few basic tasks (like "write docstring") possible to invoke on a given line where your cursor is, grabbing a smart context, using their own specific prompt
ability to define your own custom tasks flexibly, or download them as plugins
But use something like the command palette Ctrl+Shift+P for it. There's probably even smarter and more efficient stuff that can be done via clever use of RAG, embedding models, etc. The current approach is among the laziest possible and definitely suffers from problems, yeah.
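In the meantime, a partial workaround with today's tools is to turn the unsolicited ghost text off entirely and keep only on-demand invocation. A sketch for VSCode's settings.json (the two setting names below are the ones I believe current VSCode and the Copilot extension use - worth double-checking against your versions):

```jsonc
{
  // Stop inline "ghost text" suggestions from appearing as you type;
  // completions can still be triggered manually on demand.
  "editor.inlineSuggest.enabled": false,

  // Per-language Copilot switch; "*" is the catch-all default.
  "github.copilot.enable": {
    "*": false
  }
}
```

Cursor is VSCode-based, so similar knobs should exist there, though the setting names may differ.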
Hey, I think I share a lot of these emotions. Also left my corp job some time ago. But the change that happened to me was a bit different: I think I just don't like corporate programming anymore. (Or corporate design, writing, and so on.) When I try to make stuff that isn't meant for corporate in any shape or form, I find myself doing that happily. Without using AI, of course.
Nice, I hope it will last longer for you than my 2.5 years out of the corporate environment … and now I'm observing the worse parts of AI hype in startups too, due to investor pressures (as if "everyone" was adding "stupid chatbot wrappers" to whatever products they try to make .. I hope I'm exaggerating and I will find some company that's not doing the "stupid" part, but I think I lost hope of not seeing AI all around me .. and not literally every idea with an LLM in the middle is entirely useless).