I’m interested to hear what worked for you, but I suspect that the root cause of failure in most cases is insufficient motivation to converge. It takes two to tango, and without a shared purpose that feels more important than saving face, there isn’t enough incentive to overcome epistemic complacency.
That being said, better software and social norms for arguing could significantly reduce the motivation threshold.
Aside from what’s already here, I can think of a few “character profiles” of fields that would benefit from LessWrong infrastructure:
Hard fields that are in decent epistemic health but could benefit from outsiders and cross-pollination with our memeplex (e.g. economics).
Object-level skills that outside experts can perform, but where the current epistemological foundations are so shaky that procedural instructions work poorly (e.g. home cooking).
Very useful areas where good information exists but finding it requires navigating a lemon market (e.g. personal finance).
Fields that have come up regularly as inputs into grand innovations that required knowledge from multiple areas (e.g. anything Elon needed to start his companies).
I don’t think the bottleneck is lack of recruitment, though; the problem is that content has no place to go. As you rightly point out, things that aren’t interesting to the general LW audience get crickets. I have unusual things I really want to show on LessWrong that are on their 5th rewrite, because I have to cross so many inferential gaps and somehow make stuff LW doesn’t care about appealing enough to stay on the front page.
The somewhat cynical take is that open-attendance events (meetup.com and LW) are like group projects where organizers are competing for attendees. This makes organizing events a servant role rather than a leadership role, meaning that if you expend the resources to put on an interesting talk and offer free pizza, people will think they’ve done their bit by showing up and adding entropy. Just as people balk at paying for software now that Google et al. have figured out it’s more efficient to take the money out of your back pocket via advertising, people treat meetups the same way, because organizers have zero leverage when attendees can go to some other meetup whose free pizza is funded by a tech company’s recruitment funnel.
Fixing this will require more than words alone. Informing attendees that the meetup is a “take it seriously” meetup does not cause them to take it seriously because there’s no way at present to give those words credibility.
(Unrelated: I stumbled on this post by happenstance, only to see a comment I made form a key part of it. This seems like exactly the sort of thing that should go in a user’s notifications.)
As someone who has organised meetups outside of the main hubs, I can say my experience matches pretty much everything said here. The current format is poorly suited to accomplishing anything, so much so that I’ve stepped down from organising mine because they were providing so little value. It’s a sad state of affairs, but from what I can tell the majority are content with them being low-effort social groups.
In terms of coordinating between regional hubs I would suggest opting for LessWrong instead of Facebook. Many people simply won’t see the content due to either algorithms or newsfeed blockers, and Facebook no longer has the monopoly over everyone’s social calendar that it had just two years ago.
Focusing on video quality instead of talking to a webcam is a differentiator, so that should raise your odds of success.
If someone specifically asks for criticism and I have something to say, I like to treat them like an adult instead of assuming they’re just repeating tribal shibboleths. This also has the bonus of punishing people who are insincere about wanting criticism while rewarding those who honestly seek it.
While it’s possible to gain useful skills from a failed project, opportunity costs are real. I don’t think people should be risk-averse (quite the opposite), but I do think people should put a bit of thought into a viable strategy before committing the time needed to determine if a project will succeed.
Yes, I’m aware that my comment resembles the snark you get on Hacker News, but there is a distinction: I’m saying “There’s a pile of skulls on this mountain; if you are going to climb it, figure out how to avoid making the same mistakes.”
Critical question: if you’ve done some cursory research you’ll know that you aren’t the first person to think of this. Somewhere between 10 and 100 channels have been started that focused on the Sequences, with only a couple achieving minor success (e.g. Julia Galef’s channel). Given this reality, what do you plan to do differently so this doesn’t end up as a waste of time?
The fact that such debates can go on for 500 pages without significant updates from either side points towards a failure to 1) systematically determine which arguments are strong and which are distractions, and 2) restrict the scope of the debate so opponents have to engage directly rather than shift to more comfortable ground.
There are also many simpler topics where meaningful progress could be made with current debating technology, but it just doesn’t happen because most people have an aversion to debating.
I see how the idea is sensible for some, but I’ve never felt satisfied with compartmentalised friendships where I share a small facet of myself with each group.
In addition to diversification being somewhat alienating, there are some benefits of tight-knit groups you’d struggle to replicate in a diversified social portfolio:
Lowered social transaction costs—when you divide your social time between fewer people you have more time to learn how best to work with each person
Easier trust coordination—repeated interactions over a long period of time mean you have a lot of past data to evaluate someone’s trustworthiness
Emotional investment—loyalty is rational when each person isn’t a replaceable commodity. Having tough conversations that will cause friction but pay off in the long run is worth it if there’s actually going to be a long run.
Meta-beliefs about jargon: There are some benefits to using a new word free of existing connotations, but the costs often exceed the benefits. In the first stage only a few insiders know what it means. In the second stage you can use it with most of the community, but you need to translate it for casual members and a general audience. In the third stage the meaning becomes diluted as the community starts using it for everything, so you’re basically back where you started.
In addition to the tendency for jargon to be diluted in general, jargon that’s shorthand for “I see pattern X and that has very important implications” will be very powerful, so it’s almost certain to be misused unless there are real costs (i.e. social punishments) for doing so. A better method may be to use existing phrases that are more linguistically stable.
Some draft proposals:
Carl is engaging in motivated cognition → Carl has a conflict of interest/Carl is deceiving himself/Carl is quite attached to this belief (depending on which one is applicable)
Carl is wrong about something and it’s influencing others → Carl is a bad influence
Everyone in the community is saying X → Our community has a systemic bias regarding idea X
Alice is “blatantly” wrong about X → Alice has substantial disagreements with us about X
Most of these proposals sound quite confrontational, but that’s inherent to what’s being communicated. You can’t use jargon for “Alice is saying dangerous things” within earshot of Alice and avoid social repercussions if the meaning is common knowledge.
I generally prefer norms that look like sparring—anything relevant is fair game, and anything on the boundary of personal attack is fair game so long as you can make the case for its relevance.
Personal preferences aside, the biggest norm problem I’ve encountered is when people make an assertion based on priors that are taboo to discuss, so you can’t make a solid counterargument without addressing them.
This post relies on several assumptions that I believe are false:
1. The rationalist community has managed to avoid bringing in any outside cultural baggage, so when someone admits they were wrong about something important (and isn’t making a strategic disclosure), people will only raise their estimate of that person’s incompetence by a Bayesian 0.42% (see the worked example after this list).
2. The base rate of being “stupid and bad” by rationalist standards is 5% or lower (the sample has been selected for being better than average, but the implicit standards are much higher).
3. When people say they are worried about being “wrong” and therefore “stupid” and “bad”, they are referring to things with standard definitions that are precise enough to do math with.
4. The individuals you’re attempting to reassure with this post get enough of a spotlight that their 1 instance of publicly being wrong is balanced by a *salient* memory of the 9 other times they were right.
5. Not being seen as “stupid and bad” in this community is sufficient for someone to get the things they want/avoid the things they don’t want.
6. In situations where judgements must be made with limited information (e.g. job interviews), using a small sample of data is worse than defaulting to base rates. (Thought experiment: you’re at a tech conference looking for interesting people to talk to; do you bother approaching anyone wearing a suit on the chance that a few hackers like dressing up?)
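To make assumption 1 concrete, here is a minimal worked example of the update being gestured at, using Bayes’ rule. All of the numbers are illustrative: the 5% base rate echoes assumption 2, and the likelihoods are my own guesses rather than anything from the post.

$$P(\text{incompetent} \mid \text{wrong}) = \frac{P(\text{wrong} \mid \text{incompetent})\, P(\text{incompetent})}{P(\text{wrong} \mid \text{incompetent})\, P(\text{incompetent}) + P(\text{wrong} \mid \text{competent})\, P(\text{competent})}$$

Plugging in a prior $P(\text{incompetent}) = 0.05$ with guessed likelihoods $P(\text{wrong} \mid \text{incompetent}) = 0.5$ and $P(\text{wrong} \mid \text{competent}) = 0.3$:

$$P(\text{incompetent} \mid \text{wrong}) = \frac{0.5 \times 0.05}{0.5 \times 0.05 + 0.3 \times 0.95} = \frac{0.025}{0.31} \approx 0.081$$

Even with these fairly charitable numbers the estimate jumps from 5% to about 8%, roughly seven times the 0.42% update that assumption 1 requires, which is part of why I think it fails.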
Just finished the book today, I’m somewhat impressed by how it came out given the suspicion many people had.
The author managed to take the AI arguments seriously while also striking a balance between writing an honest account of his interactions with the community, keeping it interesting for the typical reader, and avoiding lazy potshots against nerds.
My only wish is that there were a section on the practical side of rationality, but that side was widely neglected even by many of the hardcore fans, so its absence is hardly a fair critique of a book about AI safety.
The amounts are disputed due to damages resulting from Greg’s personal negligence, and if all points in our counterclaim for damages hold water, you would actually owe us thousands. After the amounts were disputed, you rebuffed all claims as trivial and gave us 36 hours to pay up or else. Since then you have taken this to every platform you could find, including contacting one person’s startup team members and potential seed accelerators, and another person’s immediate family, in an attempt to pressure them into compliance.
With regards to the vision, please don’t pretend to mourn something you actively opposed during the nine months you shared a house with us.
I like this post, and would like to see more posts like this.
Did you discover why Order of the Sphex failed?
I agree that civility norms as currently implemented are never neutral, but not that neutral enforcement is humanly impossible.
Incisive questioning of a locally unpopular view is called “being insightful”; the proponent of a locally unpopular view being triggered by it is called “letting your emotions run away with you in a rational discussion” and “blowing up at someone for no reason.” Incisive questioning of a locally popular view is called “uncharitable” and “incredibly rude”; the proponent of a locally popular view being triggered by it is called “a reasonable response to someone else being a jerk.” It all depends on whether the people doing the enforcement find it easier to put themselves in the shoes of the upset person or the person doing the questioning.
It does, if the enforcers see themselves as adjudicators of good taste rather than the people who execute the rules other people have agreed on. I suppose this is one of the few situations where not questioning authority would actually be beneficial.
It’s also worth stating that if you want more than just the pretense of civil discourse, a person who retaliates against a harsh but true criticism of their idea has to be reprimanded, not in spite of the audience’s sympathy for their emotional reaction but because of it.
Conversely, Great-Aunt Bertha skipped school in the fifties to go get drunk with sailors and was the first woman in the Hell’s Angels. Great-Aunt Bertha thinks it is very rude that Great-Aunt Gertrude keeps saying “a-HEM” five times a sentence just because she’s talking the way she normally talks. It’s not polite to interrupt what people are saying by getting offended and storming out. And that whole “sir” and “ma’am” business is actually offensive. Children are people and it is wrong to treat them as if they are subservient to adults.
Great-Aunt Bertha and Great-Aunt Gertrude will have some difficulty agreeing about what is polite behavior at the Thanksgiving table.
I’m not sure whether this is true of your typical Aunt Bertha, but it is my experience that everyone, including the more Bertha-ish types such as myself, agrees that politeness means something approximating Aunt Gertrude’s style. The real dispute is not whether politeness is completely subjective, but which point along the continuum between blunt honesty and hyper-politeness is best in a given situation.
This isn’t the same for respect, as that is an internal reaction rather than a consensus-based social norm. Many hacker types will only take time out of their day to poke holes in an idea if it at least has some parts worth saving. This makes criticism a mark of respect in those subcultures, in opposition to almost everywhere else.
On the other hand, many aspects of etiquette have nothing to do with being nice to people but instead are ways of signalling that one is upper-class, or at least a middle-class person with pretensions of same. (Most obviously, anything about what forks one uses; more controversially, rules about greetings, introductions, when to bring gifts, etc.) You wind up excluding poor and less educated people, which people in many spaces don’t want.
I’d like to use this to register an informal complaint that the norms in the rationalist community, including the ones on discourse, contain a large proportion of things that suit the aesthetic sensibilities of WASPy middle-class intellectuals rather than what’s instrumentally rational for achieving most of our stated goals.
A combination of turnkey systems (e.g. wiki, docs, spreadsheets) during development; we will likely also be using this preregistration database once it is a bit more polished and we have experiments suited for it.
Edit: whoops, I thought you were AndHisHorse, although they are also welcome to contact me if interested in craft rationality.
My husband works for Google and AFAICT their policy is “show up on time for important meetings, get your work done, otherwise we don’t care.”
I am already aware of this, and I’m not sure why it seems as though I’m unaware of how things work at companies like Google, given the distinction between categories I highlighted in the above comment:
There is a big difference between an employee who works semi-irregular hours and misses irrelevant meetings and one who goes completely off the grid without any warning when they are being relied upon to do a specific task.
Most startup employees are not PR people, and “scheduled news appearance” is a relatively small fraction of what PR people do.
I chose an infrequent but very clear-cut scenario so it would serve as a good example of someone being relied upon and dropping the ball. Pointing out that it is rare is fighting the hypothetical, like saying you wouldn’t pull the lever in a trolley problem because it might get you arrested.
If you find this hypothetical unsuitable, perhaps one of the following would work better:
The head programmer on a team taking a spur-of-the-moment vacation the week before the next software release deadline.
The sysadmin/whoever not returning phone calls for a few days when a software bug locks all users out of the app.
The team lead who was meant to be giving a presentation to the CEO to show the new design/whatever decides to take a long lunch and is an hour late.
The CEO who repeatedly ducks calls from his investors because he is averse to explaining why quarterly growth metrics took a nosedive.
The new hire who reads the unlimited vacation spiel and decides to take a three month vacation post-induction so he can “take time to recharge in order to become more productive” on the employer’s dime.
I’m not saying any single one of these examples will get someone fired, but a repeated pattern of behaviour like this will.
There is also the point that people who have these jobs know this on some level; even if they are unreliable in social situations, they do not behave like that when they think they can’t get away with it.
The point I’m making is that there are situations where reliability definitely does matter (e.g. community projects/volunteer-run events), and a widespread norm of people behaving like it doesn’t is greatly hindering the ability of those projects to operate.
Whether reliability matters socially is a little more open to dispute, and I’ll grant that it is reasonable to have reached different conclusions, as my attempts to suggest it does are gestures in the direction of There Are Rules Here.