Yep, I’m currently finding the balance between adding enough examples to posts and being sufficiently un-perfectionistic that I post at all.
My current main criterion is something like “Do these people make me feel good and empowered, and give me a sense of community?” I expect that to change over time.
If a simple integer doesn’t work for you, maybe split the two columns into several different categories? If you want to go fancy, weighted factor modelling might be a good tool for that.
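If it helps, here’s a minimal sketch of what the weighted version could look like. The categories, weights, and numbers are made-up placeholders, not a recommendation:

```python
# Hypothetical categories and weights; pick whichever factors actually matter to you.
WEIGHTS = {"fun": 0.4, "depth": 0.4, "shared_projects": 0.2}

def weighted_score(ratings: dict) -> float:
    """Collapse per-category ratings (e.g. 1-10) into one weighted number."""
    return sum(WEIGHTS[category] * rating for category, rating in ratings.items())

is_score = weighted_score({"fun": 8, "depth": 4, "shared_projects": 5})    # 5.8: how it is
want_score = weighted_score({"fun": 8, "depth": 9, "shared_projects": 5})  # 7.8: how I want it
```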
Feel free to adapt it however it makes sense for you. :)
It’s all about the difference: If they are the same, leave everything as is. If “want” is higher than “is”, make some intentional decisions to invest into that relationship more. If “want” is lower than “is”, ask yourself wtf is going on there and how to change it.
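Spelled out as code, a minimal sketch of that decision rule (assuming plain integer ratings; the names are just illustrative):

```python
def review_relationship(name: str, is_score: int, want_score: int) -> str:
    """Map the want/is difference onto the three possible follow-ups."""
    if want_score > is_score:
        return f"{name}: make some intentional decisions to invest more."
    if want_score < is_score:
        return f"{name}: figure out what's going on there and how to change it."
    return f"{name}: leave everything as is."

print(review_relationship("Alice", is_score=4, want_score=8))
```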
I actually told the most hippie human on my list (spending months on rainbow gatherings-level hippie) that she’s on it. To my surprise, she felt unambiguously flattered. Seems like the people who know me trust that I can be intentional without being objectifying. :)
Yeah, but I don’t remember claiming anywhere that I can cure anybody’s depression, and don’t really intend to ever do that...?
I did not recommend any particular intervention in my post. I just tried to explain some part of my understanding of how new psycho- and social technologies are generated, and what conclusions I draw from that.
If you expect most if not all established therapeutic interventions to not survive the replication crisis—what would you consider sufficient evidence for using or suggesting a certain intervention?
For example, a friend of mine felt blue today and I sent them a video of an animated dancing seal without extensively googling for meta-analyses on the effect of cute seal videos on people’s moods beforehand. Would you say I had sufficient evidence to assume that doing so is better than not doing so? Or did I commit an epistemic sin in making that decision? This is an honest question, because I don’t yet get your point.
Agreed. But sitting around and sulking is a bummer, so I’d rather keep learning, exploring, and sometimes finding things that work for me.
So, in other words—I am wrong, hippies are wrong, and most if not all therapies that so far look like they are backed by evidence are likely wrong, too.
Who or what do you suggest we turn to for fixing our stuff?
Yep, added a reference to survivorship bias to the text. Thanks.
Well, there goes that bit of overconfidence. Thanks.
Agreed—I added the 7th point to the list now to account for this.
Response on the EA Forum.
Thanks for adding clarity! What does “support” mean in this context? What are the key factors that prevent the probabilities from being >90%?
If the key bottleneck is someone to spearhead this as a full-time position and you’d willingly redirect existing capacity to advise/support them, I might be able to help find someone as well.
It’s not the same thing; the link was broken because Slack links expire after a month. Fixed for now.
Flagged the broken link to the team. I found this, which may or may not be the same project: https://www.safeailondon.org/
I’m not in London, but aisafety.community (afaik the most comprehensive and far too little-known resource on AI safety communities) suggests the London AI Safety Hub. It lists some remote alignment communities as well. You might want to consider those as fallback options, but you probably already know most if not all of them.
Let me know if that’s at all helpful.
That’s one of the suggestions from the CanAIries Winter Getaway where I felt least qualified to pass judgment. I’m working on finding out about their deeper models so that I (or they) can get back to you.
I imagine that anyone who is in a good position to work on this has existing familial/other ties to the countries in question though, and already knows where to start.
Yep, the field is sort of underfunded, especially after the FTX crash. That’s why I suggested grantwriting as a potential career path.
In general, for newcomers to the field, I very strongly recommend booking a career coaching call with AI Safety Support. They have a policy of not turning anyone down, and quite a bit of experience in funneling newcomers at any stage of their career into the field. https://80000hours.org/ is also a worthwhile address, though they can’t make the time to talk with everyone.
Hah, this makes a lot of sense. Thanks!
An addition to that: If we look through the goggles of Sara Ness’ Relating Languages, the rationalist style of conversation is at the far end of the internally focused dialects Debater/Chronicler/Scientist. In my experience, more gooey communities have way more Banterer/Bard/Spaceholder-heavy types of interactions, which focus more on people’s needs in the situation than on forming and communicating true beliefs. People don’t necessarily know which dialects they speak themselves, because their way of interacting just feels normal to them, and everyone else’s weird. It’s hard to learn to speak dialects that are not your natural default. For example, I didn’t even notice myself slipping into Bard/Banterer while writing this post, but in hindsight it’s fairly obvious how it diverges from the LessWrong language game.
I think the LW way is ideal for its purpose, but I’m realizing that there’s a whole lot of tacit knowledge and implicit norms involved in understanding and doing it. This strong selection for a particular style of communication may be responsible for a significant chunk of the difficulty I’m perceiving in interfacing between the rationalist and other memeplexes. That goes in both directions: for the rationalist community learning from other memeplexes, and for useful memes getting from rationalist circles into the outside world.
Awesome, congratulations on the start of your networking journey!
Even though it can be really disheartening, failure is an inevitable part of the journey. Remember the Edison quote: “I have not failed. I’ve just found 10,000 ways that won’t work.”