Yeah, but I don’t remember claiming anywhere that I can cure anybody’s depression, and I don’t really intend to ever do that...?
The Dunbar Playbook: A CRM system for your friends
I did not recommend any particular intervention in my post. I just tried to explain some part of my understanding of how new psycho- and social technologies are generated, and what conclusions I draw from that.
If you expect most, if not all, established therapeutic interventions not to survive the replication crisis, what would you consider sufficient evidence for using or suggesting a particular intervention?
For example, a friend of mine felt blue today, and I sent them a video of an animated dancing seal without extensively googling for meta-analyses on the effect of cute seal videos on people’s moods beforehand. Would you say I had sufficient evidence to assume that doing so is better than not doing so? Or did I commit an epistemic sin in making that decision? This is an honest question, because I don’t yet get your point.
Agreed. But sitting around and sulking is a bummer, so I’d rather keep learning, exploring, and sometimes finding things that work for me.
So, in other words: I am wrong, hippies are wrong, and most if not all therapies that so far look like they’re backed by evidence are likely wrong, too.
Who or what do you suggest we turn to for fixing our stuff?
Yep, added a reference to survivorship bias to the text. Thanks.
Well, there goes that bit of overconfidence. Thanks.
AISafety.info “How can I help?” FAQ
Sequence opener: Jordan Harbinger’s 6 minute networking
Advice for newly busy people
Agreed—I’ve now added a 7th point to the list to account for this.
Advice for interacting with busy people
Response on the EA Forum.
EA might systematically generate a scarcity mindset that produces low-integrity actors
Community building: Lessons from ten years of facilitation experience
Thanks for adding clarity! What does “support” mean in this context? What are the key factors that prevent the probabilities from being >90%?
If the key bottleneck is finding someone to spearhead this as a full-time position, and you’d be willing to redirect existing capacity to advise/support them, I might be able to help find someone as well.
It’s not the same thing; the link was broken because Slack links expire after a month. Fixed for now.
Flagged the broken link to the team. I found this, which may or may not be the same project: https://www.safeailondon.org/
I’m not in London, but aisafety.community (as far as I know the most comprehensive, and far too little-known, resource on AI safety communities) suggests the London AI Safety Hub. Some remote alignment communities are listed on aisafety.community as well. You might want to consider them as fallback options, though you probably already know most if not all of them.
Let me know if that’s at all helpful.
I actually told the most hippie human on my list (the kind of hippie who spends months at Rainbow Gatherings) that she’s on it. To my surprise, she felt unambiguously flattered. Seems like the people who know me trust that I can be intentional without being objectifying. :)