This reminds me that not everyone knows what I know.
If you work with distributed systems (by which I mean any system that must pass information between multiple, tightly integrated subsystems), there is a well-understood concept of maximum sustainable load, and that number turns out to be roughly 60% of maximum possible load across a wide range of systems.
I don’t have a link handy to show you the math, but the basic idea is that the probability that one subsystem will have to wait on another rises sharply with the total load on the system (in simple queueing models, expected waiting time grows roughly like ρ/(1−ρ), where ρ is utilization), and the load level that maximizes throughput (total amount of work done by the system over some period of time) comes in just above 60%. If you do less work you are wasting capacity (in terms of throughput); if you do more work you will gum up the works and waste time waiting, even if all the subsystems are always busy.
We normally deal with this in engineering contexts, but as is so often the case, this property holds for basically anything that looks sufficiently like a distributed system. Thus the “operate at 60% capacity” rule of thumb will maximize throughput in lots of scenarios: assembly lines, service-oriented software architectures, coordinated work within any organization, an individual’s work (since it is normally made up of many tasks between which information must be passed, with the topology spread out over time rather than space), and perhaps most surprisingly an individual’s mind-body.
“Slack” is a decent way of putting this, but we can be pretty precise and say you need ~40% slack to optimize throughput: more and you tip into being “lazy”, less and you become “overworked”.
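To make the claim concrete, here’s a toy simulation (a sketch, not the derivation alluded to above) of a single-server queue with exponential arrivals and service times. The rates, job count, and seed are arbitrary choices for illustration; the point is just that average waiting time climbs steeply as load approaches capacity.

```python
import random

def avg_wait(arrival_rate, service_rate, n_jobs=20000, seed=0):
    """Simulate a single-server FIFO queue (M/M/1) and return the mean wait per job."""
    rng = random.Random(seed)
    t = 0.0            # clock tracking arrival times
    server_free = 0.0  # time at which the server next becomes free
    total_wait = 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)   # next job arrives
        start = max(t, server_free)          # job waits if the server is busy
        total_wait += start - t
        server_free = start + rng.expovariate(service_rate)
    return total_wait / n_jobs

# With service_rate fixed at 1.0, arrival_rate is the utilization (load):
low = avg_wait(arrival_rate=0.6, service_rate=1.0)   # 60% load
high = avg_wait(arrival_rate=0.9, service_rate=1.0)  # 90% load
```

Under the standard M/M/1 formula the mean wait is ρ/(μ−λ): about 1.5 time units at 60% load versus 9 at 90% load, so pushing utilization from 60% to 90% multiplies waiting several times over.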
If I can be direct, since you often are: speaking at least for myself, I appreciate that you disagree, but I really dislike the way you do it. In particular, you are often unnecessarily confrontational in ways that end up insulting other people on the site or assuming bad faith on the part of your interlocutors.
For example you already did this in the comments on this post when you replied to Oli to say he was wrong about his own motivation. I think it’s fair to point out that someone may be mistaken about their own motivations, but you do it in a way that shuts down rather than invites discussion. Whatever your intended effect, it ends up reading like your motivation is to score points, in your own words, “in a way trivially predictable by monkey dynamics”, and makes comments on LW feel like a slightly more hostile place.
I’m in favor of you being able to participate because your comments have at times proved helpful, and calling out that which you disagree with is important for the health of the site, but only if you can do so in a way that leads to productive discussion rather than threatening it.
You get to deal with this a lot in engineering, and it’s also notable that the level of advice people ask for is often correlated with their skill level: the least skilled people most often ask for operational advice, on up to the most skilled people most often asking for mission advice, though this largely seems to be a matter of where the frontier of skill is for them. Part of the role of a more experienced engineer is to notice when less skilled engineers are asking questions at the wrong level and gradually help them move up to the right level of advice request.
For example, I work in software, so a new engineer might come to me with a concrete question, like how to write code that detects a string matching a pattern as part of the larger goal of converting data from one format to another. It usually makes sense to answer the initial question (“here’s a regex you could write” or “here’s a fast way to match this based on its prefix”), but then usually there is more to be considered. Why do you need to match this string? Could you use a library to do this conversion instead, or at least to do most of the heavy lifting (tactical advice)? Why do you need to do this conversion anyway (strategic advice)? And what customer value are we trying to deliver with this code (mission)? It’s not all that surprising to find that the operational question really points to a larger issue that, once addressed, will solve the operational question by replacing it with something entirely different. The intuition then becomes: if you’re having to work really hard to make something work, maybe you should step back and check that you’re working on the right thing.
I find people are pretty open to this line of questioning as long as you come to listen and not to judge. If you’re going to ask why, even at the operational level, people are generally much more receptive if you come fully open to the possibility that they are already doing the right thing and just need confirmation. This turns the situation into something the advice giver can act on: how can you give advice in a way that encourages the asker to consider the question at what you consider the right level, while still fulfilling their need for advice at the level they asked?
Reading this I was reminded of something. Not to say rationality or EA are exactly religions, but the two function in a lot of the same ways, especially with respect to providing shared meaning and building community. And if you look at new, non-state-sponsored religions, they typically go through an early period where they are small and geographically colocated, and only after sufficient time with everyone together do they have a chance to grow without fracturing (at which point we would no longer call it “growth” so much as dispersion). Consider for example the Jews in the desert, English Puritans moving to North America, and Mormons settling in Utah. Counterexamples that perhaps prove the rule (because they produced different sorts of communities) include early Christians spread through the Roman empire and various missionaries in the Americas.
To me this suggests that much of the conflict people feel today about Berkeley comes from the unhappiness of being a rationalist who isn’t living in Berkeley while the rationality movement is getting itself together in preparation for later growth. Importantly, for what I think many people are concerned about, this is a necessary period that comes prior to growth, not to the exclusion of growth (not that anyone is intentionally doing this; it’s more a natural strategy that communities take up under certain conditions because it seems most likely to succeed). Being a rationalist not in Berkeley right now probably feels a lot like being a Mormon not in Utah a century ago, or a Puritan who decided to stay behind in England.
Now, if you care about existential risk you might think we don’t have time to wait for the rationality community to coalesce in this way (or to wait to see if it even does!), and that’s fair but that’s a different argument than what I’ve mostly heard. And anyway none of this is necessarily what’s actually going on, but it is an interesting parallel I noticed reading this.
Nice. I picked up something like this idea from Anna. Basically, the words that help one person are not the words that help another, so if you come up with different words to describe the same thing, you should share them in case they help someone else. Thus there is value in writing your own self-help: even if it mostly helps yourself, it usually helps others too.
Put more poetically, it doesn’t take great genius to help somebody; it just takes being enough like them and knowing something they don’t that you can explain it to them in a way that they understand.
Interesting. I don’t really know enough to respond beyond to say it sounds like there is more going on that is not being disclosed.
This is the sort of project I might be willing to take the lead on if folks think it’s valuable (though I’m currently too uncertain to commit), but others might be better positioned to take it on if they find it valuable. I put that out there just to make clear that I’ll do it if it seems worth doing; but if someone else is especially excited about the idea, I invite you to “steal” it from me and run with it, since my time is, alas, currently expensive, I have a lot of other commitments already, and I prefer doing philosophy in my spare time rather than more of the work I normally get paid for.
In sum, the executive thread in my brain did everything in its power to shut itself off.
Literally gave me chills!
Sometimes when I’m sitting zen I find myself wanting to be doing almost anything else. I keep having thoughts about wanting the bell to ring and the period to be over. Just being is too much for the conscious mind! But, as you note, this is tantamount to a wish for, if not death, then at least temporary respite from the burden of living.
I don’t quite know what to do with that, but I will say that at other times I am at peace when sitting: when I am able to sit in open awareness and the self falls away. I don’t fail to notice the moments—it’s not like I’m in a trance or asleep—but I also don’t want them to go any faster or slower. They can just be.
I’ll also note that it sounds like I do something similar to you when my mind wants to turn away from what it would otherwise be doing: I let my mind wander. I find I’m better rewarded for my time with a refreshed mind and new ideas than if I spent it escaping from myself.
I fail to see why we cannot both speak to what we think is true and do so in a civil way.