Aprillion
Have we become so anti-social that the only 2 options are to do it alone or not at all?
I’m afraid I do understand your point of view. I’ve felt very exhausted for the last few years, so I haven’t been helping my friends in open source lately; they opted for coding assistants instead, and now when I see the code I recoil from the AI slop and don’t wish to return to the project. If they want things done and I don’t “want” to help, what are their options?
Brave new world we live in: an infinite productivity increase, from zero to something, for people who don’t have time to become good at a craft, and burnout for the few of us who used to be good and well paid but became overwhelmed by the ever-ready, Waluigi-ing, incompetent-assistant attractor.
eg when the whole point of function A is to call function B under certain conditions, Claude may just…forget to call function B. and not fix this, after repeated reminders.
aaaah 😱 how are there people who don’t find it completely, utterly insane to accept such behaviour from a coding tool?
for me, it’s like an elevator that “sometimes” jumped half a meter and then refused to go to some floors. I would call the emergency repair line if that happened, not excuse it with “it’s so much more convenient than the stairs, even if you have to press the 6th-floor button multiple times; it might drive you to the 12th floor first and the 4th floor second, but it will almost certainly work on the 3rd try” … and if I broke my leg (~didn’t know how to program in some language), this unreliable elevator would sound MORE scary to me, not less.

I think I must be missing some kind of adrenaline enthusiasm that would make me excited around the hype for an incompetent technology that will probably kill us all not long after it gets actually competent … or maybe I’m just generally becoming a grumpy old man.
It’s not that it weakens your point; it’s that starting a sentence with “It’s not that …” triggers audiences to narrate your writing in “AI voice”. It disintegrates a reader’s brain because just the smell of AI is a noxious fume. Too often it’s a sign of other lurking deficiencies; there’s never just one cockroach in the kitchen.
oh god, I really really hate the self-illustration 😱 enough to think that it’s brilliant? not sure yet...
80–90% are falling behind what, exactly? Not wanting to decrease your productivity by 20% and leak customer data sounds like surprisingly rational collective behaviour to me. It’s probably best to pay for chatbot/coding-assistant subscriptions for any employee who wants them, since they will use them anyway and “free” tiers are paid for with data, and integrating any “AI” used to attract investors in the last few years. But do you have statistics that paying customers actually want those AI-powered products at non-dumping prices? Did anyone show any non-self-reported, measured increase in productivity (in terms of what the company produces and its customers pay for, not lines of code)? Did any early AI-first company other than Nvidia report profit numbers instead of just revenue? Do early adopters from 5 years ago do better than late adopters from 5 months ago?
tbh, “wait until it starts working” might be a good strategy if there is very little first-mover advantage.. AGI is not here yet, not sure any company can prepare for it by adopting current LLMs “more”
Sounds to me like we always have to calculate a social path integral, to a level of approximation appropriate to the situation, even in ask culture… If a friend is lactose intolerant and they know I know that, then even in ask culture it would be weird for me to ask if they want some non-vegan ice cream (and they might assume that if I asked, I would be either joking or offering vegan ice cream, not being actively stupid). So I don’t see the option for zero echoes, tbh; just the option to agree that a coarse approximation of social consequences is totally fine in most situations and as a default, and that it’s better to err on the side of oversimplification rather than overthinking it and not interacting at all.
Or some questions, like “May I cut your wrists?”, seem almost never appropriate, except perhaps as a joke between the right kind of people, or as meta-level sarcasm when judging how much someone is genuinely into the ask-culture thing… the number of echoes can be a fraction sometimes.
So I would imagine that, even in ask culture, it’s a mistake not to treat public comments (with their more diverse audience) as needing more social consideration than DMs, and a mistake worth pointing out to people when the comment could have been formulated with a better escape hatch…
something went wrong with the link rendering ⇒ https://arxiv.org/pdf/2411.00640
I also wish there was no industry that would serve as an example for that employment model...
nah 🙈, the stupid companies will self-select out of the job market for not-burned-out good programmers, and the good companies will do something like “product engineering”: product managers and designers will make their own PoCs to validate with stakeholders, before/without endless specifications handed over to engineers in the first iteration, and the programming roles will then focus on building production-quality solutions. Maybe a QA renaissance will happen too, writing useful regression tests, once domain experts can use coding assistants to automate the boring stuff and focus on domain expertise and decision-making, instead of programmers trying to guess the intent behind a written specification twice (once for the code and once for the test… or just once when it’s the same person/LLM writing both, which is a recipe for useless tests IMHO)
(..not making a prediction here, more like a wish TBH)
spotted in an unrelated discord, looks like I’m not the only person who noticed the similarity 😅
Nice, I hope it will last longer for you than my 2.5 years out of the corporate environment… I’m now observing the worse parts of the AI hype in startups too, due to investor pressure, as if “everyone” were adding “stupid chatbot wrappers” to whatever products they try to make. I hope I’m exaggerating and will find some company that’s not doing the “stupid” part, but I think I’ve lost hope of not seeing AI all around me (and not literally every idea with an LLM in the middle is entirely useless).
Gamblification
In case this feedback might be useful: I was unable to read this essay because I don’t remember the following concepts being introduced anywhere in the previous ~5 essays: “safe inputs” and “rogue behaviour”.
Especially the word “input” is used in a way that is completely alien to me as a programmer:
Here “inputs” includes all of an AI’s environment/affordances/history, rather than just e.g. the text it is receiving.
(I will wait for a recording of a talk in front of a live audience for this one...)
Yup, learning on the job about microsatellite instabilities, that E2/E6/E7 are gene names and not colour dye names, and being able to dive deeper was fun. Politics in a big pharma IT division, less so; I didn’t feel my daily activities sufficiently added up to the big picture :( I talked with former colleagues recently and a new project sounded interesting, but they’re in another round of a stupid hiring freeze at the moment, so I’m not re-joining them at this time.
I’ll probably end up choosing a job that’s more interesting than easy, and settle for a couple of small-meaning items in my life instead of a futile search for the one big-meaning thing.
This meta-answer is actually sufficient for my meta-question, so not looking for additional answers at this time (unless you think you have any insight that hasn’t been pointed to yet).
But if useful to expand on it:
4 years ago, you engaged in “fiddling around with your goals and high level strategies until you feel like you have a firm grasp on how to interact with them” / “compressing into a mishmash that includes values, strategies, and ontologies that reinforce each other” / “relating to life”
today, do you still use the same fiddling / mishmash / process for discovering affordances of relating?
if the process is invariant on reflection, do you still have the same meaning(s) / aesthetic(s) as 4-6 years ago or is the process a source of constant renewal for you?
if you changed your process of relating (not having “a firm grasp” after all), what changed?
did you have to throw away any aesthetic that looked good enough 4-6 years ago, but turned out insufficient with hindsight today?
if yes, have you added any tricks to your toolbox for finding aesthetics that would improve upon the process?
(since you don’t relate to the metaphor of slipping fingers, that suggests you don’t see any obvious mistakes in your previous approach, that you still endorse it on reflection without major “bug fixes”)
Not asking that question, so ignoring the first part.
Going all the way anti-zen is an option too, and I’m glad that the approach worked for you. For me, having a goal / meaning / “fundamental want” are all in the same bag, and I’m investigating how other people found their bags, not picking separate items one at a time 👉👈.
oh I see … yeah, the approach sounds practical enough to be worth doing empirical experiments like these. IMHO it’s already happening along similar lines, and seems more suitable for B being an LLM after pre-training and before RL, not for B being already deceptive from some non-LLM breakthrough or after RL
not an expert in alignment proposal critique, but this seems to rhyme with all other proposals in the scalable oversight family (both the cons and the pros) with one extra big alignment tax that the bigger model would be kept on the sidelines to align the weaker model (which is the other way round from commercial interests)...
I think maybe “Meaningmaking” is fiddling around with your goals and high level strategies until you feel like you have a firm grasp on how to interact with them
I was sent here from off-site discussions about https://www.lesswrong.com/posts/gi7MDF8xceBP8YkFD/meaning-in-life-should-i-have-it-how-did-you-find-yours … any tips on how you’ve kept a firm grasp on this over the years? In this analogy, I feel my fingers slipping.
I wish I could find a way to identify with the things I do in my free time too 🫠
Is this wish compatible with not throwing away a free lunch?