Gamblification

When using LLM-based coding assistants, I always had a strange feeling about the interaction. I think I now have a pointer to that feeling -[1] disappointment from having expected more (again and again), followed by a low-grade disgust, and an aftertaste of disrespect growing into hatred.[2]

But why hatred—I didn’t expect that color of emotion to be in there—where does it come from? A hypothesis comes to mind 🌶️ - I hate gambling,[3] LLMs are stochastic, and whenever I’ve had a chance to observe people for whom LLMs worked well, it looked like addiction.

From another angle: when doing code review, it motivates me when it’s a learning opportunity for both parties. Lately, it feels like the other “person” is an incompetent, lying psychopath who can copy&paste things around and has so far been lucky enough to only work on toy problems, so no one noticed they don’t understand programming… at all. Not a single bit. They are even actively anti-curious. Just roleplaying—fake it till you make it, except they haven’t made it yet.[4]

I recently lost my love of programming and I haven’t found my way back yet. I blame LLMs for that. And post-capitalist incentives towards SF-style disruptive network-effect monopolization, enshittification of platforms, gamification of formerly-nuanced human interactions. And companies and doomers that over-hype current “AI”—I believe in most of the future risks and some of the future potential, and that the current state of the art is amazing, a technological wonder, magic come real… so how the heck did it happen that the hype has risen so much higher than reality for so long, when reality is so good?!?

I want progressive type systems and documentation on hover. I don’t want the if condition swapped during a refactoring and the test then “fixed” to match. I don’t want one line skipped when migrating from JSON to some random format. I don’t want *+ instead of + in my regexp. A random file deleted. Messing with .gitignore. A million other inconveniences. Have you seen how people un-minified Claude Code—the sheer number of workarounds, the cringe IMPORTANTs inside the system prompt, and the constant reminders? Have you seen people seeing all that and concluding it’s proof that “it works”?!?

I believe that when companies ask software developers to improve their “productivity” by using LLM coding assistants, it is the next stage of enshittification—hook the users first, then squeeze the users to satisfy the customers, then milk the customers to make (a promise of) profits, and now make the employees addicted to the gambling tools in order to collect data for automating those employees away later. Programmers and managers alike.

Gamblification. A machine for losing the coding game after which only AGI will be able to save us from all the code slop. Opinions?

  1. ^

    If you prefer em dashes, please use your imagination—I don’t like them.

  2. ^

    Similar to an interview where you ask about a random tangent that is obviously related to the previous topic; the candidate explains it perfectly fine, but in a way disconnected from that topic… so you ask for an example and it goes completely off the rails (as if they had no experience with the practicalities around the topic, just recited buzzword definitions).

    Or when talking to a high-ranking stakeholder and all goes well until they agree with you about a wrong detail, so you ask for their opinion on a core point and realize they don’t seem to have a clue—they were just vibe problem-solving the whole time. Especially when they can’t even see in your eyes that they just made a mistake. (I mean, good managers don’t have to be domain experts, but they always have a desire to learn more and the social skills to recognize when you think you have the info they need… bad managers just make a decision when asked to make a decision.)

  3. ^

    Rolling a physical die is fine by me, and I enjoy The Settlers of Catan or Dostihy a Sázky (the Czech version of Monopoly, which is far superior). I’m not exactly in love with slot machines[5], loot boxes, mobile games, or D&D clones. But (post-)modern sports betting and high-frequency trading are outright evil.

  4. ^

    I finally understand the stochastic parrot metaphor—there is nothing wrong with Markov processes, parrots are highly intelligent animals, pattern matching is not “just pattern matching” but a real quality, and a calculator is superhuman at long division … but the [something something ¯\_(ツ)_/​¯ understanding/​thinking/​reasoning] is supposed to be more than that?!?

  5. ^

    BTW, they are called “winning machines” in my local language(s), and I bet they would be less of a social problem if they were called “losing machines” (to match the median outcome).