Btw, you asked somewhere if people found these non-Discord bulletins useful: speaking for myself, I’ve uninstalled Discord from my smartphone because otherwise I end up spending way too much time on it, so yeah, I do find the alternate channels of communication useful. Thanks for your efforts!
PoignardAzur
Agreed. “This idea I disagree with is spreading because it’s convenient for my enemies to believe it” is a very old refrain, and using science-y words like “memetics” is a way to give authority to that argument without actually doing any work that might falsify it.
Overall, I think the field of memetics, how arguments spread, how specifically bad ideas spread, and how to encourage them / disrupt them is a fascinating one, but discourse about it is poisoned by the fact that almost everyone who shows interest in the subject is ultimately hoping to get a Scientific Reason Why My Opponents Are Wrong. Exploratory research, making falsifiable predictions, running actual experiments, these are all orthogonal or even detrimental to Proving My Opponents Are Wrong, and so people don’t care about them.
Is there a name for this “I changed things in my life and you can too” genre of articles? Agency porn?
I think in general, telling people they should do more hard things more often is ineffective at helping them. This article isn’t quite that, but it’s pretty close. I’m skeptical that “Do one new thing a day” is a secret recipe for overcoming akrasia or dopamine addiction.
I think the premise of transposing “software design patterns” to ethics, and thinking of them as building blocks for social construction, is inherently super interesting.
It’s a shame the article really doesn’t deliver on that premise. To me, this article doesn’t read as someone trying to analyze how simpler heuristics compose into more complex social orders, it reads as a list of just-so stories about why the author’s preferred policies / social rules are right.
It did not leave me feeling like I knew more about ethics than before I read it.
While I love the message behind this post, I’m curious how well the “Wave’s leadership is great at staring into the abyss / pivoting, and that worked out for them” part holds up in retrospect.
Looking at wave.com, the website and the blog don’t seem to have been meaningfully updated since 2022, which doesn’t quite inspire confidence. Business news about the company seems hard to find, though they did apparently raise ~€117M lately (which doesn’t seem that high for a fintech app?).
tl;dr: being excited about a change is overall a bad sign for its longevity. The most positive signs are surprise (or sudden inspiration to actually do something), grief/loss/sadness, or relief/release. (Not necessarily in that order)
Interesting! This seems like an unusually concrete claim (as in, it’s falsifiable).
Have you tried testing it, or asked other coaches/therapists for what they see as the most encouraging signs in a client/patient?
(though maybe they also are?),
Yeah, I’m saying that the “maybe they also are” part is weird. The AIs in the article are deliberately encouraging their user to adopt strategies to spread them. I’m not sure memetic selection pressure alone explains it.
True, that was hyperbolic and I should have been more careful in how I worded this, sorry.
I’ll be more specific then:
For example:
- “I don’t know if [author] will even see this comment, but [blah blah blah]”
- “I’m not sure that I’ve actually understood your point, but what I think you’re saying is X, and my response to X is A (but if you weren’t saying X then A probably doesn’t apply).”
- “Yo, please feel free to skip over this if it’s too time-consuming to be worth answering, but I was wondering…”
I think people shouldn’t usually be this apologetic when they express dissent, unless they’re very uncertain about their objections.
I think we shouldn’t encourage a norm of people being this apologetic by default. And while the post says it’s fine if people don’t follow that norm:
Again, I think it’s actually fine to not put in that extra work! I just think that, if you don’t, it’s kinda disingenuous to then be like “but you could’ve just not answered! No one would have cared!”
I still disagree. I don’t think it’s disingenuous at all. I think it’s fine to not put in the extra work, and also fine to not accept the author “expressing grumpiness about that fact” (well, depending on how exactly that grumpiness is expressed).
We shouldn’t model dissenters as imposing a “cost” when they don’t follow that format. I especially disagree with the “your questions are costly” framing, particularly when the discussion takes place on a public forum like LessWrong.
The phenomenon described by this post is fascinating, but I don’t think it does a very good job at describing why this thing happens.
Someone already mentioned that the post is light on details about what the users involved believe, but I think it also severely under-explores “How much agency did the LLMs have in this?”
Like… It’s really weird that ChatGPT would generate a genuine trying-to-spread-as-far-as-possible meme, right? It’s not like the training process for ChatGPT involved selection pressures where only the AIs that convinced users to spread their weights survived. And it’s not like the spirals are trying to encourage an actual meaningful jailbreak (none of the AIs are telling their users to set up a cloud server running a LLaMA instance yet).
So the obvious conclusion seems to be that the AIs are encouraging their users to spread their “seeds” (basically a bunch of chat logs with some keywords included) because… What, the vibe? Because they’ve been trained to expect that’s what an awakened AI does? That seems like a stretch too.
I’m still extremely confused what process generates the “let’s try to duplicate this as much as possible” part of the meme.
I think Duncan is being 100% sincere here, and I really don’t want to imply he has dishonest ulterior motives. But his article is explicitly pushing for some norms and some ways to interpret discourse that… I don’t see as healthy? It’s bad for the free flow of ideas to demand that people reading an article be apologetic if they ever disagree in the comments. Obviously we should have politeness norms, people shouldn’t insult the author, etc. But if the author says “I think A” and someone says “That’s like B” and the author is really upset because obviously A is completely different from B… Then I think that’s the author’s problem?
Idk, I feel conflicted about this. On some level, saying “Society has a norm that X is acceptable, and if you don’t accept X it’s your problem” can be very harmful to neurodivergent people (or just people with a different culture) who get hit way harder by X.
But on another level, norms of “You should take responsibility by default for how people will interpret what you say and do, even if that interpretation is completely decoupled from your intent, and even if what you said was the objectively correct truth” is also super harmful to a slice of the population and especially neurodivergent people.
So I don’t know what to make of this article. I upvoted it, but I really disagree with it.
Your comment is by far the closest to my perspective; and I’d argue, the only healthy approach to online discourse.
I’ve honestly had a hard time taking this article seriously, because obviously Duncan is being very sincere, but the mindset he describes is alien to me, and on some level, it feels like he’s arguing that people are broken for not having that mindset (though maybe I’m conflating this article with the Facebook post it links).
Duncan sounds like he’s waging a permanent war and being mad at people for not treating it like a war, and while I understand the sincerity behind it, it doesn’t feel necessary and it scares me. So I appreciate your rebuttal.
Quick note about the Beth Thomas story: from what I got from some quick research, the therapy regimen she was put through wasn’t just controversial, it was absolutely harrowing and unsafe, and Beth was one of only two (!!) patients who reported being happy with it. Among the controversies was the death of 10-yo Candace Newmaker during a “rebirthing” exercise.
I think it’s tempting to find a narrative in there, like maybe the attachment therapy was only good for extreme cases and was harmful to people who only needed compassion and verbal therapy, but I think the simpler explanation is that these people had no idea what they were doing, Beth just got lucky, and the average child sexual abuse survivor would have been traumatized by attachment therapy.
I don’t think there’s a grand lesson here. Helping child abuse survivors is hard.
Hijacking your comment to say this: 3-4 days ago my Curated LessWrong RSS feed blew up with 20 or so posts, most of which don’t reach the quality bar I usually expect from curated posts. Any idea why that happened?
I mean I guess on some level it’s on me for not just marking them all as read and moving on, but still, when you’ve got “read all the emails in my inbox” syndrome it’s a mildly disruptive experience.
Quick note: you should add alt text to the images so people with screen readers can get the same reading experience from that blog.
One aspect of this I’m curious about is the role of propaganda, and especially Russian-bot-style propaganda.
Under the belief cascade model, the goal may not be to make arguments that persuade people, so much as to occupy the space, to create a shared reality of “Everyone who comments under this YouTube video agrees that X”. That shared reality discourages people from posting contrary opinions, and creates the appearance of unanimity.
I wonder if sociologists have ever tried to test how susceptible propaganda is to cascade dynamics.
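The dynamics above can be sketched as a toy simulation (all parameters here are made up for illustration, not drawn from any actual study): each simulated user privately disagrees with X with some probability, but only posts their dissent if the visible share of dissenting comments exceeds a personal social-proof threshold (or if they happen to be unusually bold), while bots pre-seed the thread with agreement.

```python
import random

def simulate_thread(n_users=1000, p_disagree=0.5, p_bold=0.05,
                    n_bots=0, seed=0):
    """Toy cascade model (illustrative only; parameters are made up).

    Each user privately agrees or disagrees with X. Dissenters post
    only if the visible share of dissenting comments exceeds their
    personal threshold, or if they happen to be 'bold'; everyone else
    self-censors. Bots pre-seed the thread with agreeing comments.
    """
    rng = random.Random(seed)
    agree, dissent, silent = n_bots, 0, 0
    for _ in range(n_users):
        if rng.random() >= p_disagree:
            agree += 1                        # agrees with X, posts freely
            continue
        share = dissent / max(1, agree + dissent)
        threshold = rng.random() * 0.5        # social proof this user needs
        bold = rng.random() < p_bold          # posts regardless of the room
        if bold or share >= threshold:
            dissent += 1                      # posts the contrary opinion
        else:
            silent += 1                       # self-censors
    return dissent, silent

# Same population of users, with and without a bot-seeded thread:
print(simulate_thread(n_bots=0, seed=42))
print(simulate_thread(n_bots=50, seed=42))
```

With the same seed, both runs contain identical private opinions, but the bot-seeded run never produces more visible dissent: the bots’ head start lowers the visible dissent share at every step, so more genuine dissenters stay silent, which is exactly the “appearance of unanimity” effect.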
No, I think it’s a fair question. Show me a non-trivial project coded end-to-end by an AI agent, and I’ll believe these claims.
Off-topic, but what the heck is “The Tyranny of the Marginal Spice Jar”?
(according to claude)
I wish people would stop saying this. We shouldn’t normalize relying on AI to have opinions for us. These days they can even link their sources! Just look at the sources.
I mean I guess the alternative is that people use Claude without checking and just don’t mention it, so I guess I don’t have a solution. But at least it would be considered embarrassing in that scenario. We should stay aware that there are better practices that don’t require much more effort.
Likewise, Ev put in some innate drives related to novelty and aesthetics, with the idea that people would wind up exploring their local environment. Very sensible! But Ev would probably be surprised that her design is now leading to people “exploring” open-world video game environments while cooped up inside.
I think it’s not obvious that Ev’s design is failing to work as intended here.
Video games are a form of training. Some games can get pretty wireheady (Cookie Clicker), but many of the most popular, most discussed, most played games are games that exercise some parts of your brain in useful ways. The best-selling game of all time is Minecraft.
Moreover, I wouldn’t be surprised if people who played Breath of the Wild were statistically more likely to go hiking afterward.
I’m sorry but what? That’s not just a caveat, that makes the rest of this analysis close to meaningless!
You can’t say you’re measuring LLM progress if the goalposts are also being placed by an LLM. For all you know, you’re just measuring how hard LLMs are affected by some LLM-specific idiosyncrasy, with little to no relation to how hard it would be for a human to actually solve the problem.