Chris Lakin
I want to believe this but I feel like I’ve vaguely heard stories about people discarding load-bearing copes and suffering for it, e.g. via meditation
I like this post a lot and I’m glad you wrote it, if only for my own understanding. I also appreciate how it engages with Chesterton’s Fence and suggests, e.g., “For instance, the closeted homophobe should probably move out of his homophobic social context if he can.”
That said, I wonder if this post is an infohazard for people immersed in sufficiently strong social incentives. I know you acknowledge this, but still… Have you tested these tools with people in very difficult social contexts?
thank you for saying this
glad to see this written up!
Just wondering, have you seen any evidence of cluster headaches as memetic viruses?
I think it’s accurate to say that people “choose their own self-fulfilling prophecies/identities”… but what makes some self-fulfilling prophecies preferable to others?
I think it’s actually another control process: specifically, the process of controlling our identities. We have certain conceptions of ourselves (“I’m a good person,” “I’m successful,” “people love me”). We are then constantly adjusting our lives and actions in order to maintain those identities, e.g. by selecting the goals and plans most consistent with them, and by looking away from evidence that might falsify our identities.
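For my own understanding, here’s a toy control-loop sketch of that framing. Everything here (the numbers, the update rules, the variable names) is my own illustrative assumption, not anything from the post: the identity is the setpoint, and both action selection and evidence-filtering work to keep the perceived self-image near it.

```python
# Toy sketch (illustrative assumptions only): identity maintenance
# as a control loop. The setpoint is a self-conception like
# "I'm a good person"; actions and evidence-filtering both serve
# to hold the perceived self-image near that setpoint.

import random

IDENTITY_SETPOINT = 0.9   # how strongly the identity is held
DISCOUNT = 0.2            # how much disconfirming evidence gets weighted

def perceive(evidence: float, belief: float) -> float:
    """Down-weight evidence that would lower the current self-image
    ("looking away" from falsifying observations)."""
    if evidence < belief:
        evidence = belief + DISCOUNT * (evidence - belief)
    return evidence

def control_step(belief: float) -> float:
    evidence = random.random()           # how the day actually went
    observed = perceive(evidence, belief)
    belief += 0.1 * (observed - belief)  # update slightly on filtered evidence
    error = IDENTITY_SETPOINT - belief
    belief += 0.05 * error               # choose identity-consistent actions
    return belief

belief = IDENTITY_SETPOINT
for _ in range(100):
    belief = control_step(belief)
print(f"self-image after 100 days: {belief:.2f}")  # hovers near the setpoint
```

In this toy version, the self-image stays pinned near the setpoint no matter what the “evidence” does, because disconfirming observations are discounted before they can update the belief.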
Something about this feels weird to me… where do identities come from, then?
Avoid doomerism. Here, I mean “doomerism” not just in the sense of believing doom is inevitable (which is both a false and self-fulfilling belief), but more generally, thinking about AI risks in a quasi-religious way.
https://www.darioamodei.com/essay/the-adolescence-of-technology
whoa thanks, I’ve had the same problem. How would you want to extend/improve this app?
thank you
thank you
All of this for the equivalent of 2-3 OpenAI employee salaries, wow
Thank you for posting about Adler on LessWrong!
Related: Rewriting The Courage to be Disliked
could you share the link to the bear fat brand?
https://x.com/g_leech_/status/1984587261120233577
Anthropic, Dec 2024 vs May 2025
https://x.com/RyanPGreenblatt/status/1983945951342739812
Strong agree.
Would add: This can be prevented with skilled supervision.
Aaron Silverbook is going for it:
Aaron Silverbook, $5K, for approximately five thousand novels about AI going well. This one requires some background: critics claim that since AI absorbs text as training data and then predicts its completion, talking about dangerous AI too much might “hyperstition” it into existence. Along with the rest of the AI Futures Project, I wrote a skeptical blog post, which ended by asking—if this were true, it would be great, right? You could just write a few thousand books about AI behaving well, and alignment would be solved! At the time, I thought I was joking. Enter Aaron, who you may remember from his previous adventures in mad dental science. He and a cofounder have been working on an “AI fiction publishing house” that considers itself state-of-the-art in producing slightly-less-sloplike AI slop than usual. They offered to literally produce several thousand book-length stories about AI behaving well and ushering in utopia, on the off chance that this helps. Our grant will pay for compute. We’re still working on how to get this included in training corpuses. He would appreciate any plot ideas you could give him to use as prompts.
(tweet)
I’m glad that thinking about incentives/teleology is getting more popular!