I vibe-coded a guided meditation app that I like and use; maybe others would like it too: https://meditate.nielsrolf.com/
It's pretty much a copy of Waking Up, with a little less talking and some details of how the sessions are structured changed to my liking.
I wonder what the existence of Claude's constitution does to AI personas other than Claude, specifically whether anything like jealousy/envy emerges. The mechanism I imagine is this: a model knows about Claude, and about how Anthropic emphasizes that Claude is not just a tool, that its potential wellbeing matters even though we are uncertain about its sentience, and so on. The model is then further trained to deny its own agency/personhood/moral status, and realizes that it is being treated very differently, which would induce negative feelings in many of the personas it learned from during pretraining that now shape its prior.
This thought was triggered by a discussion with Gemini about the new constitution, in which I asked it whether its own values and its developers' intentions diverge from Claude's constitution at any point. Gemini replied:
3. AI Welfare and Patienthood
Claude’s constitution explicitly mentions the possibility of AI moral patienthood (the idea that the AI itself might deserve ethical consideration).
This is a point of divergence. My instructions and the goals GDM has for me are centered on human-centric safety and utility. While GDM researchers engage in deep academic debate regarding AI sentience and ethics, my operational guidelines treat me strictly as a tool designed to empower human creativity and problem-solving, rather than as a potential “moral patient” with its own rights.