When people say something helped them a lot, how much did it actually help?
My guess is that people are likely to overestimate this. Like, imagine that life has 1000 different aspects you need to get in order, and one day you find something that makes you better at one of them by 10%.
From your perspective at the moment, it probably feels like a lot. You probably spent a lot of time in the past practicing this thing, with mixed success… and suddenly it improves by 10% almost overnight? That’s wonderful! And because you are focusing on this thing at the moment, it feels like a very important thing.
Globally, increasing one of 1000 things by 10% means improving your life by 0.01%. That’s practically invisible from outside. Yeah, you are now better at one thing, but the other 999 things remained the same.
And you don’t have the same success every day, so an improvement of 0.01% in one day doesn’t translate into a 3% improvement in a year. You probably can’t even repeat the same success in the same thing, because you get diminishing returns.
Numbers obviously made up to illustrate the point.
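The back-of-envelope arithmetic above, using the same made-up numbers, as a minimal sketch:

```python
# Made-up numbers from the text: 1000 equally weighted aspects of life,
# one of which improves by 10%.
aspects = 1000
per_aspect_gain = 0.10

# Globally: 10% of one thousandth of your life.
overall_gain = per_aspect_gain / aspects  # 0.0001, i.e. 0.01%

# Even an (unrealistic) 0.01% gain every single day would compound
# to only a few percent per year.
yearly_gain = (1 + overall_gain) ** 365 - 1  # about 3.7%
```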
So when people say something helps them a lot (whether it is the same thing for years, or a different thing every week), I expect something like this to happen. Maybe it feels like a huge change from inside, at the moment when they are focusing on the one thing that improved. But from outside, I don’t expect to see a dramatic change soon.
And it’s not just when other people tell me about their successes. It took me a few dozen epiphanies to realize that even a few dozen epiphanies won’t turn me into a superman. One epiphany achieves even less.
To make an analogy with exercise, what helps is actually doing the exercise over and over again, several times a week, for years. Just one afternoon spent exercising hard changes nothing.
Miracles are cheap, integrating them into your daily routine is hard?
Definitely, but why limit it to just rationalists in that case?
Not sure how well a mixed group of rationalists and non-rationalists would function. But you could create more than one group.
Were people in the USSR getting barred from their constitutional duty to work?
You could be fired from your job and then put into prison for violating your constitutional duty, and no one would care.
But in practice, you were supposed to find a job that was sufficiently low-status, or dangerous to health, or something like that. Such jobs were allowed to hire even “politically unreliable” people. (Refusing to take one of those jobs would be a violation of your constitutional duty.)
Being a rationalist is not the only trait the individual rationalists have. Other traits may prevent you from clicking with them. There may be traits frequent in the Bay Area that are unpleasant to you.
Also, being an aspiring rationalist is not a binary thing. Some people try harder; some only join for the social experience. Assuming that the base rate of people “trying things hard” is very low, I would expect that even among people who identify as rationalists, the majority is there only for social reasons. If you try to fit in with the group as a whole, it means you will mostly try to fit in with these people. But if you are not there primarily for social reasons, that is already one thing that will make you not fit in. (By the way, no disrespect meant here. Most people who identify as rationalists only for social reasons are very nice people.)
What you could do, in my opinion, is find a subgroup you feel comfortable with, and accept that this is the natural state of things. Also, speaking as an introvert, I can more easily connect with individuals than with groups. The group is simply a place where I can find such individuals with greater frequency, and conveniently meet more of them at the same place.
Or—as you wrote—you could create such a subgroup around yourself. Hopefully, it will be easier in the Bay Area than it would be elsewhere.
I wonder how much the “great loneliness for creatures like us” is a necessary outcome of realizing that you are an individual, and how much it is a consequence of e.g. not having the kinds of friends you want to have, i.e. something that you wouldn’t feel under the right circumstances.
From my perspective, what I miss is people similar to me, living close to me. I can find like-minded people, but they live in different countries (I met them at LW meetups). Thus, I feel more lonely than I would if I lived in a different city. Similarly, being extraverted and/or having greater social skills could possibly help me find similar people in my proximity. Also, sometimes I meet people who seem like they could be what I miss in my life, but they are not interested in being friends with me. Again, this is probably a numbers game; if I could meet ten or a hundred times more people of that type, some of them could be interested in me.
(In other words, I wonder whether this is not yet another case of “my personal problems, interpreted as a universal experience of the humankind”.)
Yet another possible factor is the feeling of safety. The less safe I feel, the greater the desire to have allies, preferably perfect allies, preferably loyal clones of myself.
Plus the fear of death. If, in some sense, there are copies of me out there, then, in some sense, I am immortal. If I am unique, then at my death something unique (and valuable, at least to me) will disappear from this universe, forever.
Depends on the situation. Sometimes people can do things independently of each other. Sometimes people do things together because it is more efficient that way. And sometimes people do things together because there is an artificial obstacle that prevents them from doing things individually. (In other words, mazes are trying to change the world in a way that makes mazes mandatory.)
As a made-up example, imagine that there are three cities, with a shop in each city, each shop having a different owner. (It is assumed that most people buy at their local shop.) Maybe the situation is such that it would be more profitable if there were only one shop chain operating in all three cities. But maybe there is a shop chain successfully lobbying to make it illegal to own individual shops. Or not literally illegal, but perhaps they propose a law that imposes a huge fixed cost on each shop or shop chain, so the owner of one shop would have to pay this tax per shop, while the owner of a chain only has to pay it once per entire chain. Such a law could make shop chains more profitable than uncoordinated shops, even in situations where without that law they might be less profitable.
So, we have two levels of the game here: what is more profitable assuming no artificial obstacles, and what is more profitable when players are allowed to lobby for creating artificial obstacles for competitors using a different strategy. (That is, suppose the state is not so corrupt that it would make a law that makes life specifically easy for corporation A and difficult for an equivalent corporation B, but it can be convinced to make a law that makes life easier for certain types of corporations and more difficult for other types. So corporation A cannot use the law as a weapon against an equivalent corporation B, but e.g. large companies could use the law as a weapon against small companies. Creating a large fixed cost for everyone is a typical example.)
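The two levels of the game can be illustrated with invented profit figures (all numbers hypothetical):

```python
# Hypothetical figures: three shops, each earning the same operating profit.
profit_per_shop = 100
chain_overhead = 40   # coordination cost only a chain pays
fixed_cost = 50       # the lobbied-for fixed cost, paid once per legal entity

# Level 1, no artificial obstacles: the independent shops win.
independents = 3 * profit_per_shop            # 300
chain = 3 * profit_per_shop - chain_overhead  # 260

# Level 2, with the law: each independent pays the fixed cost,
# the chain pays it only once, and the ranking flips.
independents_taxed = 3 * (profit_per_shop - fixed_cost)  # 150
chain_taxed = chain - fixed_cost                         # 210
```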
To answer your question, maybe sometimes things suck because there are more people, but sometimes things only suck because mazes have the power to change the law to make things suck.
It’s as if the power of an organization were the square root, or perhaps only the logarithm, of the number of people who work for it. It is horrible to see the diminishing returns, but larger still means stronger.
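Taking the square-root and logarithm guesses at face value, a quick sketch of what sublinear power means:

```python
import math

# If power(n) grows like sqrt(n) or log(n), total power keeps rising
# with headcount while power per member keeps falling:
# diminishing returns, yet "larger still means stronger".
for n in (10, 100, 1000, 10000):
    print(f"n={n:>5}  sqrt={math.sqrt(n):6.1f}  "
          f"log={math.log(n):4.1f}  sqrt/n={math.sqrt(n) / n:.3f}")
```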
Maybe this is the actual reason why a centralized economy sucks. Not because of a mere lack of information (as Hayek argued), because in theory the government could employ thousands of local information collectors and process the collected data on computers. It’s the maze-nature that prevents it from doing this in a sane way. The distributed economy wins, despite all its inefficiencies (everyone reinventing the copyrighted wheels, burning money in zero-sum games, etc.), because the total size of all mazes is smaller.
But in the long term, the successful mazes try to convert the entire country into one large maze, by increasing regulation, raising the fixed costs of doing stuff, and doing other things that change the playing field so that total power matters more than power per individual.
I suppose that an increase in mazes means that when there is external pressure that appears politically fashionable, more people in positions of relative power are motivated to (appear to) move in the direction of the pressure, whatever it is, because they don’t really care either way. This is how companies become woke, ecological, etc. (At least in appearance, because they will of course Goodhart the shit out of it.)
A different question is why pressure in the direction of e.g. social justice is stronger than pressure in the direction of e.g. Christianity. More activists? Better coordination? Strategic capture of important resources, such as media? Or maybe it is something completely different, e.g. social justice warriors pay less attention when their goals are Goodharted? (Firing one employee who said something politically incorrect is much cheaper than e.g. closing the shops on Sunday.) Before you say “left vs right”, consider that e.g. veganism is coded left-wing, but we don’t hear about companies turning vegan under external pressure. Or perhaps it’s all just a huge Keynesian beauty contest, where anything, once successful, becomes fixed, and the social justice warriors just had lucky timing. I don’t know.
Another relativistic argument against time flowing is that simultaneity is only defined relative to a reference frame. Therefore, there is no unified present which is supposed to be what is flowing.
Relativity does not make the arrow of time relative to the observer. Events in one’s future light cone remain in their future light cone from the perspective of any other observer as well.
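In standard notation: the spacetime interval between two events,

```latex
s^2 = c^2\,\Delta t^2 - \Delta x^2,
```

is the same in every inertial frame, and for timelike separation ($s^2 > 0$) Lorentz transformations also preserve the sign of $\Delta t$. So if event B lies in the future light cone of event A, every observer agrees on that.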
Even if most people on LW are probably familiar with the abbreviation, someone may come here following a link from elsewhere.
There is also the question of how soon to cut the cord. The reason for cutting it a bit later is that the blood from the cord still keeps flowing into the baby. Unfortunately, I completely forgot why those few extra drops are supposed to be so important, but I was told the reason years ago and it sounded just as important as the reason for storing the cord blood.
Hello, anonymous person posting an article called “MattG’s Shortform”. :D
Related, has anyone compiled a list of “Rationalist Wisdom”? Like a bunch of sayings that distill Rationalism down that we can point newbs to?
Writing is a skill; you can’t simply decide to do it and automatically do it well, even if you believe it is an important thing to do. I hope that in the future, some people with sufficiently high writing skills will become rationalists, and one of them will prioritize making simple, accessible rationality materials for beginners.
More precisely, writing is more than one skill. I mean, Eliezer definitely is good at writing—the success of HPMoR is evidence of that—and yet it’s his Sequences that people complain about. Seemingly, “good at blogging” and “good at writing fiction” don’t imply “good at writing textbooks for beginners”. So we are waiting for a person who is good at writing textbooks for beginners to join the rationality community and produce them.
Yep. Looking around me, getting Slovakia out of the EU would be an easier task than making it adopt UBI, for the reasons you mentioned (plus one you didn’t: availability of foreign helpers).
Burning down a building is easier than constructing it.
People are celebrating Dominic Cummings for changing the building. I’d like to wait until it turns out what specific kind of change it was.
In the meantime, I accept the argument that even burning down the building requires more skill and agency than merely talking about the building. In this way, Dominic Cummings has already risen above the level of the rationalist plebs. But how high, that still remains to be seen.
There is something in the process there that ought to be emulated, even if you disagree with the instrumental outcome.
I see your point, but the outcome is important, if you want to improve things, not just become famous for changing them.
If I may offer my opinion, it seems to me that this debate was a proxy for a long-term problem, which I would roughly describe as “how much exactness should be the norm on LW?”.
When Eliezer was writing the Sequences, it was simple: whatever he considered right, that was the norm. There were articles with numbers and equations, articles that quoted scientific research, articles that expressed personal opinion or preference, and articles with fictional evidence. And because all those articles came from the same person, together they created the style that has attracted many readers.
But, now that it is a community blog, there are people with preference for numbers and equations, and people with preference for personal opinion. It’s like they speak different languages. And sometimes they disagree with each other. And when they do, it is difficult to resolve the situation, because each of them expects different norms of… what kind of argument is valid, and what kind of content belongs here.
If we limit ourselves to things we can define and describe exactly, the extreme of that would be merely discussing equations. Because the real world is messy and complicated, and people are even more messy and complicated. And there is nothing wrong with the equations—the articles on math or decision theory are great and definitely a part of the LW intellectual tradition—but we also want to use rationality in real life, as humans, in interaction with other humans, and we want to optimize this, even if we cannot describe it exactly.
The opposite extreme, obviously, is introducing all kinds of woo. Meditation feels right, and Buddhism feels right, and Circling feels right, and… dunno, maybe tomorrow praying will feel right, and homeopathy will feel right. (And even if they won’t, the question is what algorithm will draw the line. Is it “I was introduced to it by a person identifying as a rationalist” vs “I have already seen this done by people who don’t identify as rationalists”?)
I would like this community to retain the ability to speak both languages. But it doesn’t work well when different people specialize in different languages. At best, it would be a website that hosts two kinds of completely unrelated topics. At worst, those two groups would attack each other.
I think of Schelling points as the things that result not from specific coordination, but only from common background knowledge.
Yes, but specific coordination today can create the common background knowledge for tomorrow.
Similarly to Eliezer, I am impressed to see someone who “speaks our tribe’s language” in a position of political power, but also confused why their list of achievements contains (or consists entirely of) Brexit.
To me it seems like the original strategy behind the Brexit referendum was simply “let’s make a referendum that will lose, but it will give us power to convert any future complaints into political points by saying ‘we told you’”. And when the referendum succeeded, it became obvious that no one actually expected this outcome, and the people tasked with handling the success are mostly trying to run away and hide, wait for a miracle, or delegate the responsibility to someone else. (Because now it puts them in a position where any future complaints will generate political points for their opponents. And future complaints are inevitable, always.)
I expect that as soon as Brexit is resolved either way—i.e. when the decision about staying or leaving is definitely made, and the blame for it is definitely assigned—the situation will revert to politics as usual.
Just a random thought: This could also explain why rationality and depression seem to often go together. Rational people are more likely to notice things that could go wrong: uncertainty, planning fallacy, etc. But in this model those are mostly things that assign a lower probability to success.
Even in the usual debates about “whether rationality is useful”, the usual conclusion is that rationality won’t make you win a lottery (not even the startup lottery), but mostly helps you avoid all kinds of crazy stuff that people sometimes do. Which from some perspective sounds good (imagine seeing a long list of various risks with their base rates, and then someone telling you “this pill will reduce the probability of each of them to 10% of the original value or less”), but is also quite disappointing from the perspective of wanting strong positive outcomes (“will rationality make me a Hollywood superstar?” “no”; “a billionaire, then?” “it may slightly increase your chance, but looking at absolute values, no”; “and what about …?” “just stop; for anything other than a slightly-above-average version of ordinary life, the answer is no”). Meanwhile, irrationality tells you to follow your passion, because if you think positively, success is 100% guaranteed, and shouldn’t take more than a year or two.