Thanks! I wonder if there’d be legal issues because Kocherga is not a non-profit (non-profits in Russia can be politically complicated, as I’ve heard). But it’s definitely worth trying.
One more thing: unlike the other stuff, I feel that developing the EA movement in Russia is more talent-constrained: it could be much more active if we had one enthusiastic person with managerial skills and ~10 hours/week on their hands. I’m not sure we have such a person in our community—maybe we do, maybe we don’t. (Sometimes I consider taking on this role myself, but right now that’s impossible, since I’m already juggling 3 or 4 different roles.)
OTOH, I’m also not sure how much better things would be if we had more funding and could hire such people directly. I might be significantly underestimating this course of action, because I don’t yet have much experience with extending organizational capacity through hiring.
We tried to start a local EA movement early on and held a few meetups in 2016. Introductory talks got stale quite quickly, so we put together a core EA team, with a Trello board and everything.
It wasn’t very clear what we were supposed to do, though:
We wanted to translate the EA Handbook (and translated some parts of it), but there were some arguments against this (similar to this post, which was released later).
Those of us who believed that AI Safety is the one true cause mostly wanted to study math/CS, discuss issues in utilitarianism, and eventually relocate to work for MIRI or something similar.
Others argued that you don’t have to be a hardcore rationalist to do meaningful work, and that maybe we should focus on local causes, or at least not discourage working on them.
Earning to give (which I feel was emphasized more in EA 3 years ago than it is now) isn’t very appealing in Russia, since the average income here is much lower than in the US.
So, we had ~5-6 people on the team and were doing fine for a while, but eventually it all fizzled out due to a lack of time, shared vision, and organizational capacity.
We’ve tried several approaches to reboot it since then. We haven’t succeeded yet, but we’ll try again.
Currently, the EA movement in Russia is mostly promoted by Alexey Ivanov from Saint Petersburg. He takes care of the online resources and organizes introductory EA talks and AI Safety meetups. He’s doing great work.
Another guy is working on a cool project to promote EA/rationality among talented students, but that project is still in its early stages, and I feel it’s not my story to tell.
I’ve applied to CFAR’s workshop in Prague myself (and asked for financial aid, of course); they haven’t contacted me yet.
I’ll say more about EA in a reply to this comment.
Thanks! I’m planning to write a separate post with more details on our community, activities, and accumulated experience; there’s a lot more I’d like to share that didn’t fit in this one. It might take a few weeks, though, since I write English quite slowly.
Yes, it’d be interesting to compare our experiences.
If you want to chat on a lower-latency channel, I’m @berekuk on the LessWrongers Slack (my preferred medium for chatting), or https://www.facebook.com/berekuk if you dislike Slack for some reason.
Well, we’ve actually had various versions of a “discuss and challenge your beliefs” exercise for a long time. (Previous names: “Belief Investigation” and “Structuring”.)
Here’s how it goes: split participants into pairs, ask one person in each pair to declare any belief of theirs that they want to investigate (compare: reddit.com/r/changemyview), and then let them discuss it with their partner for a predetermined period of time.
We used this kind of activity at LW meetups a lot, because it’s easy to organize, can give you valuable updates, and can be repeated a practically unlimited number of times without losing value.
Then, last year, two people from the community who were interested in Street Epistemology proposed running SE as a regular meetup, expanding on these discussions and turning them into an actual craft. You can find plenty of information about SE on its website (check out The Complete SE Guide), but basically it’s a set of best practices for investigating a belief in a dialogue.
SE seems very aligned with LW values. They talk a lot about “doxastic openness” (being open to revising your own beliefs), probabilities (“On a scale from zero to one hundred, how confident are you that your belief is true?”), etc. People at Kocherga meetups also often incorporate the Double Crux technique into these discussions.
SE’s traditional discussion topics usually include religion and pseudoscience (although any topic will do), and SE practitioners refer to logical fallacies more often than LW does, so SE is conceptually related to the classical skeptics and critical-thinking communities. This means SE is often more approachable than LW and the Sequences, and SE meetups are currently our largest event, consistently drawing ~20 visitors every week.
So, what happened?
This post is hidden from Main, and the survey “is expired and no longer available”, even though the post says it should run for 10 more days. I wanted to share it with the Russian LW community; will it be back in some form later?
We’ve expanded a lot since we opened our own rationality-aligned time club, Kocherga, in September 2015. Our regular activities now include:
General LW meetups every 3 weeks on Sundays with talks, discussions and games
“Rationality for beginners” lectures every 3 weeks on Sundays
(the third Sunday slot is reserved for EA meetups)
Dojos on Fridays
A Sequences reading group on Mondays, started two weeks ago
Rationality-related games once a month
CFAR-style weekend workshops (we ran 4 of these in 2016)
I really should write a separate post about everything that’s happened since 2013, when the last report from our group was posted.
For the Russian LessWrong Slack chat, we agreed on the following emoji semantics:
:+1: means “I want to see more messages like this”
:-1: means “I want to see fewer messages like this”
:plus: means “I agree with a position expressed here”
:minus: means “I disagree”
:same: means “it’s the same for me”; it’s used for impressions, subjective experiences, and preferences, without any approval connotations
:delta: means “I have changed my mind/updated”
We also have 25 custom :fallacy_*: emoji for pointing out specific fallacies, plus a few more custom emoji for other kinds of low-effort, low-noise signaling.
It all works quite well, and after using it for a few months, the idea of going back to simple upvotes/downvotes feels like a significant regression.
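
If you want to replicate this scheme on your own Slack, the agreed semantics fit in a small lookup table. Here’s a minimal sketch in Python: the `describe_reaction` helper and the `:fallacy_strawman:` name are hypothetical illustrations, while the six core emoji and the `:fallacy_*:` family are the ones we actually use.

```python
# Reaction-emoji semantics agreed on for the Russian LessWrong Slack.
# Keys are Slack emoji short codes; values are the intended signals.
REACTION_SEMANTICS = {
    ":+1:": "I want to see more messages like this",
    ":-1:": "I want to see fewer messages like this",
    ":plus:": "I agree with a position expressed here",
    ":minus:": "I disagree",
    ":same:": "It's the same for me (no approval implied)",
    ":delta:": "I have changed my mind / updated",
}

def describe_reaction(emoji: str) -> str:
    """Return the agreed meaning of a reaction emoji (hypothetical helper)."""
    if emoji.startswith(":fallacy_"):
        # One of the 25 custom fallacy emoji, e.g. ":fallacy_strawman:"
        # (the specific name here is illustrative).
        return "Points out the named fallacy in the message"
    return REACTION_SEMANTICS.get(emoji, "No agreed semantics")

print(describe_reaction(":delta:"))  # -> I have changed my mind / updated
```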