I see how the idea is sensible for some, but I’ve never felt satisfied with compartmentalised friendships where I share a small facet of myself with each group.
In addition to diversification being somewhat alienating, there are some benefits of tight-knit groups you’d struggle to replicate in a diversified social portfolio:
Lowered social transaction costs—when you divide your social time between fewer people you have more time to learn how best to work with each person
Easier trust coordination—repeated interactions over a long period of time mean you have a lot of past data to evaluate someone’s trustworthiness
Emotional investment—loyalty is rational when each person isn’t a replaceable commodity. Having tough conversations that will cause friction but pay off in the long run is worth it if there’s actually going to be a long run.
Meta beliefs about jargon: There are some benefits to using a new word free of existing connotations, but costs often exceed the benefits. In the first stage only a few insiders know what it means. In the second stage you can use it with most of the community, but you need to translate it for casual members and a general audience. In the third stage the meaning becomes diluted as the community starts using it for everything, so you’re basically back where you started.
In addition to the tendency for jargon to be diluted in general, jargon that’s shorthand for “I see pattern X and that has very important implications” will be very powerful, so it’s almost certain to be misused unless there are real costs (i.e. social punishments) for doing so. A better method may be to use existing phrases that are more linguistically stable.
Some draft proposals:
Carl is engaging in motivated cognition → Carl has a conflict of interest/Carl is deceiving himself/Carl is quite attached to this belief (depending on which one is applicable)
Carl is wrong about something and it’s influencing others → Carl is a bad influence
Everyone in the community is saying X → Our community has a systemic bias regarding idea X
Alice is “blatantly” wrong about X → Alice has substantial disagreements with us about X
Most of these proposals sound quite confrontational, but that’s inherent to what’s being communicated. You can’t use jargon for “Alice is saying dangerous things” within earshot of Alice and avoid social repercussions if the meaning is common knowledge.
I generally prefer norms that look like sparring—anything that’s relevant is fair game, anything on the boundary of personal attack is fair game so long as you can make the case for its relevance.
Personal preferences aside, the biggest norm problem I’ve encountered is when people make an assertion based on priors that are taboo to discuss but you can’t make a solid counterargument without addressing them.
This post relies on several assumptions that I believe are false:
1. The rationalist community has managed to avoid bringing in any outside cultural baggage so when someone admits they were wrong about something important (and not making a strategic disclosure) people will only raise their estimate of incompetence by a Bayesian 0.42%.
2. The base rate of being “stupid and bad” by rationalist standards is 5% or lower (The sample has been selected for being better than average, but the implicit standards are much higher)
3. When people say they are worried about being “wrong” and therefore “stupid” and “bad”, they are referring to things with standard definitions that are precise enough to do math with.
4. The individuals you’re attempting to reassure with this post get enough of a spotlight that their 1 instance of publicly being wrong is balanced by a *salient* memory of the 9 other times they were right.
5. Not being seen as “stupid and bad” in this community is sufficient for someone to get the things they want/avoid the things they don’t want.
6. In situations where judgements must be made with limited information (e.g. job interviews) using a small sample of data is worse than defaulting to base rates. (Thought experiment: you’re at a tech conference and looking for interesting people to talk to, do you bother approaching anyone wearing a suit on the chance that a few hackers like dressing up?)
Just finished the book today, I’m somewhat impressed by how it came out given the suspicion many people had.
The author managed to take the AI arguments seriously while also striking a balance between writing an honest account of his interactions with the community, keeping it interesting for the typical reader and avoiding lazy potshots against nerds.
My only wish is that there had been a section on the practical side of rationality, but that side was widely neglected even by many of the hardcore fans, so it’s hardly a fair critique of a book about AI safety.
The amounts are disputed due to damages resulting from Greg’s personal negligence, and if all points in our counterclaim for damages hold water, you would actually owe us thousands. After the amounts were disputed, you rebuffed all claims as trivial and gave us 36 hours to pay up or else. Since then you have taken this to every platform you could find, including contacting one person’s startup team members and potential seed accelerators, and another person’s immediate family, in an attempt to pressure them into compliance.
With regards to the vision, please don’t pretend to mourn something you actively opposed during the nine months you shared a house with us.
I like this post, and would like to see more posts like this.
Did you discover why Order of the Sphex failed?
I agree with the idea that civility norms, as they are currently implemented, are never neutral, but not that neutral enforcement is humanly impossible.
Incisive questioning of a locally unpopular view is called “being insightful”; the proponent of a locally unpopular view being triggered by it is called “letting your emotions run away with you in a rational discussion” and “blowing up at someone for no reason.” Incisive questioning of a locally popular view is called “uncharitable” and “incredibly rude”; the proponent of a locally popular view being triggered by it is called “a reasonable response to someone else being a jerk.” It all depends on whether the people doing the enforcement find it easier to put themselves in the shoes of the upset person or the person doing the questioning.
It does, if the enforcers see themselves as adjudicators of good taste rather than the people who execute the rules other people have agreed on. I suppose this is one of the few situations where not questioning authority would actually be beneficial.
It’s also worth stating that if you want more than just the pretense of civil discourse, a person who retaliates against a harsh but true criticism of their idea has to be reprimanded, not in spite of but because the audience is sympathetic to their emotional reaction.
Conversely, Great-Aunt Bertha skipped school in the fifties to go get drunk with sailors and was the first woman in the Hell’s Angels. Great-Aunt Bertha thinks it is very rude that Great-Aunt Gertrude keeps saying “a-HEM” five times a sentence just because she’s talking the way she normally talks. It’s not polite to interrupt what people are saying by getting offended and storming out. And that whole “sir” and “ma’am” business is actually offensive. Children are people and it is wrong to treat them as if they are subservient to adults.
Great-Aunt Bertha and Great-Aunt Gertrude will have some difficulty agreeing about what is polite behavior at the Thanksgiving table.
I’m not particularly sure if this is true of your typical Aunt Bertha, but in my experience everyone, including the more Bertha-ish types such as myself, agrees that politeness means something approximating Aunt Gertrude. The real dispute is not whether politeness is completely subjective, but where along the continuum between blunt honesty and hyper-politeness it is best to sit in a given situation.
This isn’t the same for respect, as that is an internal reaction rather than a consensus-based social norm. Many hacker types will only take time out of their day to poke holes in an idea if it at least has some parts worth saving. This makes criticism a mark of respect in those subcultures, in opposition to almost everywhere else.
On the other hand, many aspects of etiquette have nothing to do with being nice to people but instead are ways of signalling that one is upper-class, or at least a middle-class person with pretensions of same. (Most obviously, anything about what forks one uses; more controversially, rules about greetings, introductions, when to bring gifts, etc.) You wind up excluding poor and less educated people, which people in many spaces don’t want.
I’d like to use this to register an informal complaint that the norms in the rationalist community, including the ones on discourse, contain a large proportion of things that suit the aesthetic sensibilities of WASPy middle-class intellectuals rather than what’s instrumentally rational for achieving most of our stated goals.
A combination of turnkey systems (e.g. wikis, docs, spreadsheets) during development. We will likely also be using this preregistration database when it is a bit more polished and we have experiments suited to it.
edit: whoops, thought you were AndHisHorse, although they are also welcome to contact me if interested in craft rationality
My husband works for Google and AFAICT their policy is “show up on time for important meetings, get your work done, otherwise we don’t care.”
I am already aware of this, and I’m not sure why it appears as if I’m unaware of how things work at companies like Google? Given the distinction between categories I highlighted in the above comment:
There is a big difference between an employee who works semi-irregular hours and misses irrelevant meetings and one that goes completely off the grid without any warning when they are being relied upon to do a specific task.
Most startup employees are not PR people, and “scheduled news appearance” is a relatively small fraction of what PR people do.
I chose an infrequent but very clearcut scenario in order to function as a good example of someone being relied upon and dropping the ball. Pointing out that it is rare is fighting the hypothetical, like saying you wouldn’t pull the lever in a trolley problem because it might get you arrested.
If you find this hypothetical unsuitable, perhaps one of the following would work better:
The head programmer on a team taking a spur of the moment vacation the week before the next software release deadline.
The sysadmin (or whoever) not returning phone calls for a few days when a software bug locks all users out of the app.
The team lead who is meant to be giving a presentation to the CEO to show the new design (or whatever) decides to take a long lunch and shows up an hour late.
The CEO who repeatedly ducks calls from his investors because he is averse to explaining why quarterly growth metrics took a nosedive.
The new hire who reads the unlimited vacation spiel and decides to take a three month vacation post-induction so he can “take time to recharge in order to become more productive” on the employer’s dime.
I’m not even saying one of these examples will get someone fired, but a repeated pattern of behaviour like this would.
There is also the point that people who have these jobs know this on some level, and even if they are unreliable in social situations they do not behave like that when they don’t think they can get away with it.
The point I’m making is that there are situations where reliability definitely does matter (e.g. community projects and volunteer-run events), and a widespread norm of people behaving like it doesn’t greatly hinders the ability of those projects to operate.
Whether reliability matters socially is a little more open to dispute, and I’ll grant that it is reasonable to have reached different conclusions, as my attempts to suggest it does are gestures in the direction of There Are Rules Here.
The key things here are value alignment and implicit norms. Netflix offers employees “unlimited vacation”; the quotation marks are there for a reason.
If your PR person flaked on a scheduled news appearance because they were in bed redditing, are you telling me the company wouldn’t mind because they are pro-autonomy?
In retrospect, snarkily proving I had paid enough attention to your post to incorporate some of it into my essay was not the best way to make the point. My apologies.
The reason I have not changed the article is that changing that information would require a careful splice to preserve the original feel of the passage, for no informational benefit. Here is why I think this:
The hypothetical journey from Ward Street to Facebook HQ, although insane from my point of view, isn’t all that uncommon among tech workers in general.
Public transport, although cheaper than driving thanks to government subsidies, is slower according to Google Maps (if they can’t provide accurate info within the commute radius of their top employees, I’ll be very surprised). This seems to be in line with the regular gripes that wander across my tumblr dash about how slow and unreliable it is, something you also point out in that post.
The Bay Area’s public transit system is really really good compared to public transit in most of the rest of the country (for one thing, it is possible to get places on it). However, our public transit is certainly inferior to, say, New York City’s. One of the ways this works is that sometimes, based on the Inscrutable Whim of the Train Gods, the train will choose to show up fourteen minutes late.
Looking at the actual data Google gives, we get an estimated commute time of 50m–1h40m on a typical workday. (I used 1h30m as the figure, since picking a number towards the higher end of the range means the hypothetical person wouldn’t be late to work half of the time.)
That same journey on public transport, assuming no delays or missed connections from the previous leg running late, is 1h45m: five minutes longer than the worst-case estimate for driving.
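As a sanity check, the arithmetic can be laid out explicitly (a minimal sketch using only the Google Maps figures quoted here, converted to minutes):

```python
# Google Maps estimates quoted above, in minutes.
drive_best = 50     # 50m, best-case driving
drive_worst = 100   # 1h40m, worst-case driving
transit = 105       # 1h45m by public transport, assuming no delays

figure_used = 90    # 1h30m: towards the high end of the driving range,
                    # so the hypothetical commuter isn't late half the time

# Even the delay-free transit time exceeds the worst-case drive.
print(transit - drive_worst)  # prints 5 (minutes transit exceeds worst-case driving)
```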
Do you have local information that would contradict this?
There seems to be a difference of opinion on what applied rationality means. In my view, CFAR is at least one step removed from helping you be more rational at life. In a sense, CFAR is the doctor who gives you the antidepressants which you take to improve your own life, rather than the people who improve your life directly—the tools that let you make your own tools.
There’s no law of the universe saying that if you teach someone literary criticism instead of writing, it won’t improve their writing skills. The concerns are around how effective it is, and whether the curriculum will end up diverging further from reality because its output is harder to measure.
Then there is also the question of bias. There is no control group and no objective measurement, you likely paid thousands of dollars to attend, and it was run by people you respect; this is hardly the standard practice of a scientific experiment.
The Bay Area has an unusually left-leaning political culture, more so than almost any other place in the U.S. This on the one hand gets you Berkeley’s riots, and on the other hand gets you (for example) more queer/trans acceptance than anywhere else in the U.S.
Being in a more politically divided/centrist place might be better in some ways, but it would also be less full of queer community and resources and events, and possibly less accepting too.
This is a very US-centric perspective. That dynamic is just not how it works over here. We don’t have a bible belt and most people have been atheist for two generations now. There’s a reason Richard Dawkins doesn’t preach here.
Manchester has the third highest LGBT percentage after Brighton and London, according to the measures I can find. As such, there are plenty of queer/trans/kink/poly events & resources.
Arguably Manchester is better than the Bay Area for this in some regards e.g. harassment on public transit, which is particularly relevant for nonwhite trans people who don’t pass very well.
LW2.0 in its current form isn’t ideal for creating measurable change in people on the object level, but if it prompts people to read more of the rationality material beyond the currently popular blog posts, that’s a force for good.
If I was in charge, I’d divide the site up into cause areas and have things be tagged for which cause area they have relevance to. Possible categories:
Otherwise interesting posts
This would allow multiple cause areas to benefit from a shared audience and mitigate most of the stepping on each other’s toes you get when each cause area is competing for dominance over the feed.
I’d also have more focus on wiki-like information distribution. Currently there is little effort being put into wikis and little status awarded for contributing to them, so they are inferior to blog posts written by a single author at a specific moment in time, but it doesn’t have to stay that way.
So there is an enormous cultural failure because no one wrote a blog post containing knowledge that is primarily of interest to Bendini?
Surely I’m not the only one who would want accurate information about an area if I was considering moving to it and not have to play twenty questions with the person who lives there, assuming I know what to ask? (e.g. “do homeless people by any chance defecate on the street?” is not a question I’d intuitively ask, even though the question is quite relevant)
In fact, Bendini did reblog and comment on a post I wrote, In Defense of Unreliability, in which I discussed the fact that I get places through trains and Uberpool. Perhaps he simply assumed I was a very unusual person, or perhaps he forgot, or perhaps he didn’t bother to read the post he was commenting on, but either way this doesn’t make me very optimistic about the plan where Bay Area rationalist bloggers transform into the Bay Area travel bureau instead of Bendini taking responsibility for not making glaring mistakes.
I did read and reblog that, yes. Consider a passage from your essay:
However, I do want to explain why I myself am quite unreliable and how I benefit from a social norm in which this unreliability is acceptable. (We should also note that I have lived in the Bay for the majority of my adult, actually-socializing life, so I may be unfamiliar with the benefits of a non-flake lifestyle.)
And a passage from mine:
When a negative attribute present in some individuals becomes woven into the cultural fabric, it becomes much more difficult to unravel. Even if it makes the community worse off on the whole, individuals can benefit in ways analogous to special interest groups. People with the trait that was previously frowned upon now get accommodations around it, ranging from a free pass to continue the behaviour, to resources being spent in order to limit its repercussions.
You are welcome to have this mutual “random flaking is allowed” agreement, but a widespread acceptance that this is the way things should be impacts anyone trying to do something important. Imagine a startup trying to operate on a policy of “yeah, just come to work whenever you feel like it, don’t worry about picking up the phone or responding to emails, just do what you want and we will have to work around it, I guess”.
This is one of the reasons for-profit ventures are far more successful: it’s not just the ability to get people to do the unglamorous work by paying them money, but the set of norms for what is and isn’t acceptable. The problem is that projects not run as for-profit businesses flounder because people don’t actually follow for-profit norms like showing up when you say you will.
This is bad if you acknowledge the existence of projects which are a poor fit for the for-profit model, as norms like this make them far less successful.
First of all, the thing I was trying to communicate was what kind of candidate you could get in the UK, the general gist being “someone who doesn’t have some limiting reagent crippling their employment opportunities to the point that $40k/year in Berkeley is the best offer that highly gifted candidate has”. (I have corrected it as such.)
I didn’t say a master’s degree, I said a master’s-degree level of general knowledge; big difference.
If you can’t tell from the details of someone’s work history that they are greatly held back, without asking questions about their mental health, then any mental illness is well-managed enough that it ceases to matter.
It should also go without saying that if you can get someone to do it in Berkeley for $40k, you can get the equivalent here for $15k. If the going rate is $65k plus bonus plus health insurance, I’d say don’t take for granted that someone’s promise of “yeah, I’ll homeschool five kids for $50k minus premises rent and employment-side costs” will actually happen, and if it does, they won’t stay in that role for anywhere near as long as you’d want them to.
If your $50k-minus-premises-rent system relies on ingroup friends being given a stipend for a full-time job, that’s fine, but it is not comparable to what we would be able to do, i.e. pay someone market rates without it being any kind of favour. If you want me to make estimates using the same degree of optimistic projection, then we’re looking at around $10k all in, if the teacher lived in the homeschool, had two roommates who worked full time, didn’t have to pay health insurance, and just got all their living expenses paid plus a £200/month stipend. It could be done in theory, but it would require every optimistic assumption to be true.
We could shuffle the numbers about a bit, quadruple the class size and offer them $100k. But as a rule, if you want to avoid being blindsided by cost overruns, you need to be quite pessimistic when making financial assumptions.