This reminds me that not everyone knows what I know.
If you work with distributed systems (by which I mean any system that must pass information between multiple, tightly integrated subsystems), there is a well-understood concept of maximum sustainable load, and we know that number to be roughly 60% of maximum possible load for a wide range of systems.
I don’t have a link handy to show you the math, but the basic idea is that the time one subsystem spends waiting on another grows superlinearly with the total load on the system, blowing up as load approaches capacity, and the load level that maximizes throughput (total amount of work done by the system over some period of time) comes in just above 60%. If you do less work you are wasting capacity (in terms of throughput); if you do more work you will gum up the works and waste time waiting even if all the subsystems are always busy.
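For the curious, the standard queueing-theory sketch of this (assuming an M/M/1 queue, a single server with exponentially distributed arrivals and service times, which may or may not be the exact model behind the 60% figure) gives the expected time a job spends in the system as

$$W = \frac{1}{\mu (1 - \rho)}$$

where $\mu$ is the service rate and $\rho$ is utilization (load as a fraction of capacity). At $\rho = 0.6$ a job takes 2.5 times its bare service time on average; at $\rho = 0.9$ it takes 10 times, and the wait blows up as $\rho \to 1$.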
We normally deal with this in engineering contexts, but as is so often the case this property will hold for basically anything that looks sufficiently like a distributed system. Thus the “operate at 60% capacity” rule of thumb will maximize throughput in lots of scenarios: assembly lines, service-oriented software architectures, coordinated work within any organization, an individual’s work (since it is normally made up of many tasks that information must be passed between, with the topology spread out over time rather than space), and, perhaps most surprisingly, an individual’s mind-body.
“Slack” is a decent way of putting this, but we can be pretty precise and say you need ~40% slack to optimize throughput: more and you tip into being “lazy”, less and you become “overworked”.
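If you’d rather see this than take it on faith, here’s a minimal simulation sketch (my own toy model, assuming exponential arrivals and service times; the exact numbers depend on the model):

```python
import random

def avg_wait(utilization, n_jobs=100_000, service_time=1.0, seed=0):
    """Simulate a single-server queue and return the mean time a job
    waits before service begins, in units of the mean service time."""
    rng = random.Random(seed)
    arrival = 0.0      # arrival time of the current job
    server_free = 0.0  # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_jobs):
        # Poisson arrivals at rate utilization / service_time
        arrival += rng.expovariate(utilization / service_time)
        start = max(arrival, server_free)
        total_wait += start - arrival
        # exponentially distributed service times with the given mean
        server_free = start + rng.expovariate(1.0 / service_time)
    return total_wait / n_jobs

for u in (0.3, 0.6, 0.8, 0.95):
    print(f"load {u:.0%}: average wait ~ {avg_wait(u):.2f}x service time")
```

Waits stay modest up through roughly 60% load (about 1.5 service times) and climb steeply after that (about 4 at 80% and 19 at 95%), which is the gumming up of the works described above.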
If I can be direct, since you often are: speaking at least for myself, I appreciate that you disagree, but I really dislike the way you do it. In particular, you are often unnecessarily confrontational in ways that end up insulting other people on the site or assuming bad faith on the part of your interlocutors.
For example, you already did this in the comments on this post when you replied to Oli to say he was wrong about his own motivation. I think it’s fair to point out that someone may be mistaken about their own motivations, but you do it in a way that shuts down rather than invites discussion. Whatever your intended effect, it ends up reading like your motivation is to score points, in your own words, “in a way trivially predictable by monkey dynamics”, and it makes the comments on LW feel like a slightly more hostile place.
I’m in favor of you being able to participate because your comments have at times proved helpful, and calling out that which you disagree with is important for the health of the site, but only if you can do so in a way that leads to productive discussion rather than threatening it.
You get to deal with this a lot in engineering, and it’s also notable that the level of advice people ask for is often correlated with their skill level: the least skilled people most often ask for operational advice, on up to the most skilled people, who most often ask for mission advice, though this largely seems to be a matter of where the frontier of skill is for them. Part of one’s role as a more experienced engineer is to notice when less skilled engineers are asking questions at the wrong level and gradually help them move up to the right level of advice request.
For example, I work in software, so a common question from a new engineer might be something concrete like how to write code that detects a string matching a pattern, in service of the larger goal of converting data from one format to another. It usually makes sense to answer their initial question (“here’s a regex you could write” or “here’s a fast way to match this based on its prefix”), but then usually there is more to be considered. Why do you need to match this string? Could you use a library to do this conversion instead, or at least do most of the heavy lifting (tactical advice)? Why do you need to do this conversion anyway (strategic advice)? And what customer value are we trying to deliver with this code (mission)? It’s not all that surprising to find that the operational question really points to a larger issue that, once addressed, solves the operational question by replacing it with something entirely different. The intuition then becomes: if you’re having to work really hard to make something work, maybe you should step back and make sure you’re working on the right thing.
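To make that concrete, the operational-level answer might look something like this (a hypothetical sketch; the pattern and data are invented for illustration):

```python
import re

# Hypothetical example: pull ISO-style dates (YYYY-MM-DD) out of legacy
# records as one small step in a larger format-conversion task.
DATE_PATTERN = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def extract_dates(record: str) -> list[str]:
    """Return every date-like substring found in a record."""
    return DATE_PATTERN.findall(record)

print(extract_dates("created 2017-03-02, updated 2017-05-11"))
# ['2017-03-02', '2017-05-11']
```

Handing over something like this answers the question as asked; the real leverage, per the above, is in then asking why the match is needed at all.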
I find people are pretty open to this line of questioning as long as you come to listen and not to judge. If you’re going to ask why, even at the operational level, people are generally much more receptive to advice if you come fully open to the possibility that they are already doing the right thing and just needed confirmation. This turns the situation around to something the advice giver can do something about, namely: how can you give your advice so as to encourage the asker to consider the question at what you consider the right level, while still fulfilling their need for advice at the level they are asking for?
Reading this I was reminded of something. Now, not to say rationality or EA are exactly religions, but the two function in a lot of the same ways, especially with respect to providing shared meaning and building community. And if you look at new, not-state-sponsored religions, they typically go through an early period where they are small and geographically colocated, and only later, after sufficient time with everyone together, have a chance to grow without fracturing (at which point we would no longer consider the growth “growth” per se and would more call it dispersion). Consider for example Jews in the desert, English Puritans moving to North America, and Mormons settling in Utah. Counterexamples that perhaps prove the rule (because they produced different sorts of communities) include early Christians spread through the Roman empire and various missionaries in the Americas.
To me this suggests that much of the conflict people feel today about Berkeley comes from the unhappiness of being a rationalist who isn’t living in Berkeley while the rationality movement is getting itself together in preparation for later growth. Importantly, for what I think many people are concerned about, this is a necessary period that comes prior to growth, not to the exclusion of growth (not that anyone is intentionally doing this; it’s more a natural strategy that communities take up under certain conditions because it seems most likely to succeed). Being a rationalist not in Berkeley right now probably feels a lot like being a Mormon not in Utah a century ago, or a Puritan who decided to stay behind in England.
Now, if you care about existential risk you might think we don’t have time to wait for the rationality community to coalesce in this way (or to wait to see if it even does!), and that’s fair, but it’s a different argument from the one I’ve mostly heard. And anyway none of this is necessarily what’s actually going on, but it is an interesting parallel I noticed reading this.
Nice. I picked up something like this idea from Anna. Basically the idea is that the words that help one person are not the words that help another, and so if you come up with different words to describe the same thing you should share them in case they help someone else. Thus there is value in writing your own self-help, even if it only helps yourself, since usually it also helps others.
Put more poetically, it doesn’t take great genius to help somebody; it just takes being enough like them and knowing something they don’t that you can explain it to them in a way that they understand.
Interesting. I don’t really know enough to respond beyond saying it sounds like there is more going on here than is being disclosed.
This is the sort of project I might be willing to take the lead on if folks think it’s valuable (I’m currently too uncertain of its value to commit), but others might be better positioned to take it on if they find it valuable. I put that out there just to make clear that I’ll do it if it seems worth doing, but if someone else is especially excited about the idea I invite you to “steal” it from me and run with it, since my time is, alas, currently expensive, I have a lot of other commitments already, and I like doing philosophy in my spare time rather than more work of the sort I normally get paid for.
In sum, the executive thread in my brain did everything in its power to shut itself off.
Literally gave me chills!
Sometimes when I’m sitting zen I find myself wanting to be doing almost anything else. I keep having thoughts about wanting the bell to ring and the period to be over. Just being is too much for the conscious mind! But, as you note, this is tantamount to a wish for, if not death, then at least temporary respite from the burden of living.
I don’t quite know what to do with that, but I will say that at other times I am at peace when sitting, when I am able to sit in open awareness and the self falls away. I don’t fail to notice the moments—it’s not like I’m in a trance or asleep—but I also don’t want them to go any faster or slower. They can just be.
I’ll also note that it sounds like I do something similar to you when my mind wants to turn away from what it would otherwise do: I let my mind wander. I find I’m better rewarded for my time with a refreshed mind and new ideas than if I had spent the time escaping from myself.
I fail to see why we cannot both speak to what we think is true and do so in a civil way.
Some years ago I got interested in the Yi Jing after reading Philip K. Dick’s The Man in the High Castle, which features the Yi Jing prominently: the book within the book (which is the alternate dimension/history version of The Man in the High Castle) is written by using the Yi Jing to make plot decisions, and one of the characters relies on it heavily to navigate life. I went on to write a WebOS Yi Jing app so I could more easily consult it from my phone, and played around with it myself.
My experience of it was mostly that it offered me nothing I wasn’t already doing on my own, but I could see how it would be helpful to others who lack my particular natural disposition for letting my mind go quiet and seeing what it has to tell me. As you note, it seems a good way to step back and consider something from a different angle, and to consider aspects of a situation you may currently be ignoring. The commentary on the Yi Jing is carefully worded such that it’s more about the decision-generation process than the decision itself, and when used well I think it can result in the same sort of sudden realization of the action you will take that my sitting quietly and waiting for insight produces.
I also know a decent number of rationalists who enjoy playing with Tarot cards for seemingly this same reason. Tarot works a bit differently because it more tells a story than highlights a virtue, but like you I think much of the value comes from placing a random framing on events, injecting noise into an otherwise too-stable algorithm, and helping people get out of local maxima/minima traps.
I’d also include rubber ducking as a modern divination method. I think it does something similar, but uses a different mechanism to get you to see things more clearly and find out what you already implicitly knew but weren’t making explicit enough to let it have an impact on your actions. My speculation about a possible mechanism of action here is something like what happens when I sit quietly with a decision and wait for an answer: you let the established patterns of thought get out of the way and let other things come through so you can consider them, in part because you can generate your own internal noise if you stop trying to direct your thought. But not everyone finds this easy or possible, in which case more traditional divination methods with external noise injection are likely useful.
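As a toy illustration of the noise-injection point (my own analogy rendered in code, nothing from the original comment; the objective function is invented): a greedy hill climber gets stuck on the nearest local maximum, while the same climber with occasional random jumps tends to find its way to better ground.

```python
import math
import random

def f(x):
    """A bumpy objective with many local maxima (invented for illustration)."""
    return math.sin(5 * x) + 0.5 * math.sin(17 * x) - 0.1 * (x - 3) ** 2

def hill_climb(x, steps=10_000, step_size=0.01, jump_prob=0.0, seed=0):
    """Greedy hill climbing, optionally with occasional random jumps."""
    rng = random.Random(seed)
    for _ in range(steps):
        if rng.random() < jump_prob:
            candidate = x + rng.uniform(-2.0, 2.0)  # a big random reframing
        else:
            candidate = x + rng.uniform(-step_size, step_size)  # a small tweak
        if f(candidate) > f(x):  # keep only improvements
            x = candidate
    return x

print(f"greedy only:       f = {f(hill_climb(0.0)):.3f}")
print(f"with random jumps: f = {f(hill_climb(0.0, jump_prob=0.05)):.3f}")
```

The jumps are the cards or the hexagram: random, but only kept when they genuinely improve on where you already are.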
A (Semi) conscious policy, in which Berkeley is kinda trying to keep their numbers down or their average person quality up. There is something to say for this policy, though I would prefer that Berkeley people are honest about it.
I hadn’t thought about it being perceived this way. As perhaps with many communities, it sometimes happens that there are more people than can be accommodated at an event or than can fit within the active social network of a person (cf. Dunbar’s number). As a result, if you are running an event or just trying to keep tabs on your friends (your tribe), you only have room for so many and have to make a cut somewhere (not necessarily consciously, but somewhere people will fade out of your purview). This unfortunately means it’s not as easy to “get in” with the Berkeley crowd, because many events and people are already at their limits, so a new person coming in requires an existing person going out.
Now there’s plenty of natural churn—people move, interests change, etc.—and this offers opportunities for new people to come in without displacing anyone, but at the margins there is definitely going to be some competition to stay in the tribe. Like, if you’re the 100th person I think of you’re at more risk of not getting an invite than if you are the 10th person I think of, and being 100th you are at more risk of being forgotten because I recently met someone new or just haven’t talked to you in a while. And if you find yourself feeling you are on the edges of the tribe it can be distressing to have to work to stay close enough to the fire to remain warm.
Anyway this is all to say that the Berkeley rationalists are of such a size that they naturally exhibit behavior patterns matching those of a human tribe. I don’t know if I would call this a “policy” though, and certainly many rationalists, being humans, are unaware they are engaged in these social dynamics, such that they might say hypocritical things to signal status, membership, etc.
Perhaps the good news is that there’s a natural counterbalance to this that I already see happening: tribal split. That is, we’re big enough in Berkeley and the Bay Area that I feel like we’re developing at least 2 if not 3 tribes. The details are still fuzzy because we’re not quite big enough to force a solid split and I expect there to always be plenty of cross-over because we are all part of the same clan (I realize now I have my use of tribe and clan reversed...), but this is naturally what will happen if we grow as a community.
Like with other small, tight-knit groups, rationalists will be welcome anywhere rationalists congregate, but only so many folks can be members of a particular congregation at the same time.
Your solutions remind me a lot of what we in the business world think of as protecting the business against existential risk from the failure of individual human resources, cutely measured as a “bus factor”: the number of people who have to get hit by a bus before your business is in trouble. A great source of information about techniques for dealing with this, at least within a small-business context, has for me been The E-Myth by Gerber, but you can also look at how continuity planning is done (although there the focus is usually on seemingly exceptional rather than normal events, though of course the exceptional is normal, and that’s the whole point of doing continuity planning).
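If it helps to make “bus factor” concrete, here’s a toy sketch of one way to compute it (my own illustration with invented data, not anything from Gerber): the smallest number of people whose loss leaves some critical function with no one who knows it.

```python
from itertools import combinations

# Hypothetical data: which people know each critical function.
knows = {
    "billing":  {"alice"},
    "frontend": {"alice", "bob"},
    "infra":    {"bob", "carol"},
}
people = set().union(*knows.values())

def bus_factor():
    """Smallest k such that losing some k people orphans a function."""
    for k in range(1, len(people) + 1):
        for gone in combinations(people, k):
            if any(not owners - set(gone) for owners in knows.values()):
                return k
    return len(people)

print(bus_factor())  # 1: losing alice alone orphans "billing"
```

A bus factor of 1 is exactly the exposure the solutions above are meant to reduce.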