Basically, there was political discussion happening with non-PC views. As far as I remember, EY thought it was bad to have that group associated with the LW brand and wanted to ban people. There was drama, and the group was renamed to Brain Debugging Discussion.
One way to prevent this from happening would be to ban political discussion explicitly.
I don’t think there’s evidence that MOOCs failed because people lacked the skills to complete them. Motivation seems to have been a much bigger issue; even courses with few skill requirements have high drop-out rates.
I know multiple people who sell self-help material that only requires watching a bunch of videos, with no exercises, and even there most people who actually pay money fail to watch all the videos.
But as things are, I expect that even if one of your meetup experiments failed, it would give us useful data.
It’s one thing to run a meetup experiment. It’s another to globally say that everyone should run their meetups in a certain way.
Global coordination needs much more buy-in from other people.
I’m also in the process of creating a Facebook group for attendees of all meetups worldwide.
There used to be a LessWrong Facebook group and now there isn’t anymore. Are you aware of what happened? What kind of governance do you want for the new group?
I’m not sure how “official meetups” would be any different by nature of being “official” or even what official means. The idea also seems a bit strange to me because I don’t think the LessWrong.com team has any claim of being an official arbiter on the term LessWrong.
LessWrong Germany e.V. is an NGO with a five-figure yearly budget.
It seems to me that an individual Toastmasters club has a lot less license to innovate than our local meetup has. A Toastmasters club can’t say, “Let’s run this marathon together under the logo of our club,” but our local meetup would have no problem running a marathon together as a LessWrong team.
It seems to me like making a top-down decision about what topics people should discuss is a way to remove agency from individual meetups. Our meetups in Berlin also aren’t discussions about topics but rather about doing rationality exercises together.
I don’t see why it would be important to have such a social standard on LessWrong. We already have a problem of people on LessWrong being discouraged from writing posts because they expect to get criticism that isn’t helpful to them.
I’d rather have social standards on LessWrong that enforce quality norms in criticism than ones that encourage people to voice criticism without providing arguments.
How about asking the user, when they click on “Community Events”, to allow the browser to access their location? Then, if the user says yes, use that location to show the right events on the left side of the front page.
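The browser’s Geolocation API supports exactly this kind of opt-in flow. Below is a minimal sketch of the filtering step, assuming each event record carries coordinates; the names `MeetupEvent` and `nearestEvents` (and the example data) are hypothetical, not anything from an actual LessWrong codebase:

```typescript
// Hypothetical shape of an event record with coordinates.
interface MeetupEvent {
  name: string;
  lat: number;
  lon: number;
}

const EARTH_RADIUS_KM = 6371;

// Great-circle distance between two points, via the haversine formula.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

// Return the events closest to the user's position, nearest first.
function nearestEvents(events: MeetupEvent[], lat: number, lon: number, limit = 3): MeetupEvent[] {
  return [...events]
    .sort((a, b) => haversineKm(lat, lon, a.lat, a.lon) - haversineKm(lat, lon, b.lat, b.lon))
    .slice(0, limit);
}

// In the browser, the position would come from the standard Geolocation API,
// which triggers the permission prompt described above, e.g.:
// navigator.geolocation.getCurrentPosition(pos =>
//   render(nearestEvents(allEvents, pos.coords.latitude, pos.coords.longitude)));
```

If the user declines the prompt, the front page could simply fall back to showing events unfiltered.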
How much does it help that you can watch what the police are doing when they still have all the guns? Maybe not such an issue in America, but what about Hong Kong?
Public officials in the US get punished very seldom, while in China it’s much easier to throw public officials into prison.
China does a lot of public opinion management because public opinion matters to powerful people.
I don’t think LessWrong is unique in that regard. Wikipedia is strongly focused on it. The StackExchange network also has a lot of content that’s intended to be available in the future.
Do you mean there are now 10 left in Leverage proper?
On an IoT sensor that triggers whenever the door is opened.
Science and Sanity, which coined the phrase “The map is not the territory”, contains a long discussion of what can be meant by saying a map is accurate and how abstraction works in this context.
I would recommend reading it if you are interested in the nature of maps.
I do think there are few cases where I would tell someone “be more angry”. I might however say:
Don’t withdraw from situations that might produce anger.
Don’t suppress your anger.
Have personal boundaries (which can produce anger when they are violated).
Those suggestions might lead to a person feeling angry more often, but they have a different focus.
If every study on depression used its own metric for depression, optimized for that specific study, it would be hard to learn from the studies and aggregate information across them. It’s much better to have a metric that’s consistent.
Consistent measurements allow reacting to how a metric changes over time which is often very useful for evaluating interventions.
A possible fix is getting an e-bike.
Neuroticism is one of the Big Five personality traits. Personality traits aren’t things to be managed moment to moment.
You generally want to minimize stress and maximize output. Instead of thinking “It would be good to be more neurotic”, it would be better to think “I need more goals I care about to pursue”.
Planting trees for the sake of the environment is not a crazy idea. It’s a mainstream idea held by many people. You can buy a beer and, in the process, support the protection of rainforest land.
hereisonhand spoke about being able to extend that idea directly into “survival of the amazon rainforest was a significant climate change initiative”.
To me that suggests he sees viability as the same thing as an action being good. It looks to me like reasoning about public interventions without the EA mental models that are needed in this context to reason well.
Thinking in terms of internal parts is a mental model that a good portion of the LW community that’s interested in self-improvement techniques can use. You need it for the Internal Double Crux technique that CFAR teaches.
Yet, it’s not the only model out there. If I believe that a specific belief I can identify is the issue, I’d personally rather do a version of Leverage’s belief reporting, which assumes that I as a whole either hold a belief or don’t, than do parts work.
As far as abstraction goes, I think it’s a key feature for self-introspection. If you are mentally entangled with the part that you are introspecting you won’t see it clearly.
A lot of meditation is about reaching a mental state where you can look at your thoughts without identifying with them.
You are right that there are contexts where viability is a useful notion. It just isn’t here.