I suspect that we shouldn’t talk about “work” in the abstract, because the details matter a lot. People can feel tired after returning from a job, and yet work hard the whole weekend on their hobby without getting tired.
To me it feels like after my brain accumulates a certain amount of frustration, it stops being efficient until it gets a reset. The reset may be as simple as having lunch, especially if I manage to forget about my work during it.
The difficulty of the work is not a problem per se. This weekend I studied a lot about jQuery and ePub, for fun. My work usually isn’t more difficult than that. What makes me feel tired is: meetings, frequent artificial deadlines, unclear assignments, communication with unfriendly people, etc. I can work 12 hours a day on my hobby. I cannot imagine working more than 8 hours a day (and I strongly wish I could work much less) at a regular job.
Sometimes I get the impression that people on the autistic spectrum “have outlived their usefulness” (TV Tropes) from the perspective of society. There was a time, not that long ago, when normies didn’t care about computers, because using them required esoteric knowledge of things such as binary numbers, and they didn’t care about the internet, because it was mostly a way to interact with people who cared about these esoteric things. To become good with computers, you had to spend a lot of time obsessively studying something that didn’t have much value in the eyes of most people.
Then it became common knowledge that IT is where the money is, and also working with computers became easier. Suddenly people with no intrinsic interest in esoteric knowledge started paying attention to IT. And now you have students of computer science who freely admit that they actually don’t like programming and consider it boring… but they are willing to do it for money (because presumably all other jobs are boring, too).
The weirdoes became a minority in the field they created, and the social norms are turning against them. Caring about the craft has already become low-status; if you care about clean code and algorithmic complexity, you are obviously not paying attention to the larger picture, i.e. the buzzwords the management has been most happy about recently. There are not enough resources to do anything properly (although there sometimes are resources to do the same thing over and over again as the old solutions keep falling apart under their technical debt). Social skills are more important than technical ones. Even in open source, people are kicked out of projects for being bad at political games.
Of course, there is value in social skills, and there is harm in excessive weirdness. People can have long unproductive wars about the minutiae of formatting source code. Lack of communication within a project can waste a lot of resources. Documentation sucks when it is written by people who hate talking to others. Introducing social skills to the project should be good… if we could keep the balance. If the people with social skills could respect the people with technical skills, and vice versa. But it seems to me that after the initial resistance is broken, the pendulum swings to the opposite extreme, and suddenly we have a formerly nerdy profession where people are regularly reminded that nerds suck.
Normie-ness is a positive feedback loop; the more normies you have, the greater the pressure to eliminate the non-normies. People with better social skills will almost by definition succeed at pushing the narrative that what we really need is to give even more power to people with social skills. And when things start falling apart, instead of shutting up and fixing the code, more and more meetings are scheduled, because for a normie, talking endlessly is the preferred (and the only known) way to solve all problems.
To some degree, this is not as bad as it sounds. Software is easy to copy. You could have 99% of software projects completely dysfunctional, and the remaining 1% would still move the planet forward. Similarly, you can have a million anti-vaxxers, but as long as you have one Einstein, science can still move forward. One person doing the right thing is more important than millions wasting time, if the solution can be copied.
But ultimately, the resources are scarce, and the people pretending to care are competing against the people who actually care. When you get to the point where the Einstein can’t get a job, because he is outcompeted at every position by people with better social skills, then, unless he is independently wealthy (but how could he save for early retirement if he can’t get a good job?) or has a generous sponsor (where he again competes against people with better social skills), he will not be able to work on his theory of relativity. And if only 1% of programmers care about clean code, you won’t get clean code in 1% of projects; it will be much less, because most projects are developed by teams, and you would need a majority of the team to actually care.
There are multiple reasons, and here is one of them:
Imagine yourself as a boss. How would you check whether your employees are doing the stuff you pay them for, or just taking your money and slacking? (Because there are many people who would enjoy the opportunity to take your money for nothing.)
This depends on the work. Sometimes the outputs are easy to measure and easy to predict. Suppose your employees are making boxes out of cardboard. You know how many boxes per hour the average worker can make, so you have a simple conversion from your money to the number of boxes produced. If someone does not produce enough boxes, they are either incompetent or slacking; in both cases it would make sense to replace them with someone who will produce enough boxes.
This is the type of work that would be safe to let people do remotely—as long as the same number of boxes is produced, you get the value you paid for—although there may be other reasons that make it difficult: transportation of the cardboard and the boxes, or the need for a machine.
But imagine work like software development. To the eternal frustration of managers, the output is hard to measure. Both because of the inherent randomness of the work (bugs appear unexpectedly and may take a lot of time to fix), and because the people who supervise the work are usually not programmers themselves (so they have no idea how much time “writing a REST controller which provides data serialized in XML format” should take—are we talking minutes or weeks?). Different people have strong, differing opinions on what quality means, but it is a fact that some projects can grow steadily for years, while others soon collapse under their own weight.
Having this kind of work done remotely, how do you distinguish between the case when the employee solved a difficult problem, fixed someone else’s bug, and spent some time preventing other bugs from happening in the future… and the case when someone did some quick and dirty work in 2 hours, spent the remaining 6 hours watching Netflix, and afterwards reported 8 hours of work? Trying to impose some simple metric such as “lines of code written per day” is more likely to hurt than help, because it punishes useful legitimate work, such as designing, or fixing bugs.
Making the people stay in the office guarantees that they will not spend 6 hours watching Netflix. They may do good work, they may do bad work, or they may find ways to procrastinate (e.g. watch YouTube videos instead). But at least, there is a long list of things they can’t do.
It seems like a problem of trust, but on a deeper level it is a problem that you can’t even “trust but verify” if you can’t actually verify the quality of the output. So you have to rely on things like “spent enough time looking busy”, which sucks for both sides.
High status feels better when you are near your subordinates (when you can watch them, randomly disrupt them, etc.). High-status people make the decision whether remote work is allowed or not.
Something like Goodhart’s Law, I suppose. There are natural situations where X is associated with something good, but literally maximizing X is actually quite bad. (Having more gold would be nice. Converting the entire universe into atoms of gold, not necessarily so.)
EY has practiced the skill of trying to see things like a machine. When people talk about “maximizing X”, they usually mean “trying to increase X in a way that proves my point”; i.e. they use motivated thinking.
Whatever X you take, the priors are almost 100% that literally maximizing X would be horrible. That includes the usual applause lights, whether they appeal to normies or nerds.
Should humans be less disgusting? All in favor, raise your tentacle...
There is so much low-hanging fruit. Doctors don’t wash their hands consistently. Parents send sick kids to kindergartens and schools. They are told repeatedly; and they ignore it. Cost/benefit analysis? If I send my sick child to kindergarten, it’s your cost and my benefit; that’s all I need to know.
To me it helps to imagine that I am explaining the topic to someone else. If I had enough time, I would never copy the textbook; I would rewrite it using my own words, and probably change the entire structure. (In other words, instead of “paper1 → paper2”, it would go “paper1 → internal model → paper2”.) Unfortunately, doing things the way I wish takes a lot of time.
For example, if I make notes about programming, I am trying to write the simplest code that illustrates the concept in isolation from other concepts. (Most examples I find online are introducing multiple concepts at the same time. Okay, I suppose in reality, you usually use X and Y and Z together in the same project. But I still want to see X used separately, and Y separately, and Z separately. And then an example of how X and Y and Z go together.)
I would suggest to explore the concept in unusual ways. For example, when you learn about commutative operators, don’t just use “addition” and “multiplication” as obvious examples, but also think about ones like “least common multiple” or even “these words have the same amount of strokes in Chinese”. (Ultimately coming to “there is an arbitrary undirected graph, where the nodes are the possible inputs, and each edge contains an arbitrary output as a label”.)
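To make the exploration concrete, here is a minimal Python sketch (the operator choices are mine, just to illustrate testing commutativity on examples beyond addition and multiplication):

```python
from math import gcd

def lcm(a, b):
    # least common multiple: a less obvious commutative operator
    return a * b // gcd(a, b)

pairs = [(4, 6), (10, 15), (7, 21), (9, 12)]

# the familiar examples
assert all(a + b == b + a for a, b in pairs)
assert all(a * b == b * a for a, b in pairs)

# the unusual one behaves the same way
assert all(lcm(a, b) == lcm(b, a) for a, b in pairs)

# while a superficially similar operator does not
assert any(a - b != b - a for a, b in pairs)
```

Checking a property mechanically on a handful of pairs is of course not a proof, but it is a quick way to build intuition about which operators belong in the “commutative” bucket.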
Also, when you learn things, the value is not merely in the individual things, but also (mostly?) in their connections to other things. That is the difference between a newbie who can recite the facts but cannot apply them, and an expert who can immediately take three abstract concepts and chain them together to solve a problem. (Not sure what exactly this implies for note-taking and the zettelkasten method. My preferred way to make notes would be like making wiki pages, so I would mention these connections at the bottom of the page.) For example, there are many proofs that there are infinitely many primes, but I enjoyed reading an argument about how having finitely many primes would allow us to create an insane compression algorithm. (You take the input as a binary number, factorize it, and save the factors. If your input is much larger than the hypothetical largest prime, the output file size will be a logarithm of the input file size.)
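The compression argument can be made concrete with a toy sketch, assuming a hypothetical world whose complete list of primes is 2, 3, 5, and 7 (the numbers here are made up for illustration):

```python
def factor_exponents(n, primes):
    # express n as a product of powers of the given primes;
    # return the exponent vector, or None if other prime factors remain
    exps = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)
    return exps if n == 1 else None

primes = [2, 3, 5, 7]        # the hypothetical complete list of primes
n = 2**40 * 3**13 * 7**5     # a large "file" read as a single number
print(factor_exponents(n, primes))   # [40, 13, 0, 5]
```

Each exponent is at most log2(n), so the fixed-length exponent vector needs only on the order of log(log n) bits per prime: logarithmic in the size of the file. Since no lossless scheme can shrink all inputs (pigeonhole), the list of primes cannot be finite.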
So, you did something that made you feel smarter. To make sure the effect is real, you could take an IQ test, assuming you took one in the past, and compare the numbers.
I think it is relatively common to have a feeling of becoming smarter without actually being so. A mere change of mood can move you from “ignores things” to “observes things and wonders about details”. Learning something gives you domain-specific knowledge. Abstract mumbo-jumbo can make you feel like you understand the deep truths about the world. Good speakers know how to induce these feelings in their audience. Crackpots can induce them in themselves.
But the feeling doesn’t necessarily correspond to reality. In fact it is often the other way round, e.g. when people are on drugs, their critical thinking turns off, and they believe themselves to be super-smart. Only, when they write down their supreme wisdom, it turns out to be garbage once they get sober. Your hormones can have a similar effect, e.g. if you are super excited about something.
Try doing actual tasks that someone else gave you, and see whether you actually became better according to that other person’s criteria. Anything else is just potentially deluding yourself.
For me, “Italy” sounds convincing, because it is closer to us (I live in Europe), geographically and culturally, than China. (Talking about China feels about as relevant as talking about Mars.)
A video from Italy, showing the crowded hospitals and soldiers on streets, would probably feel more convincing than citing numbers. (Also, this was shared on SSC.) I would only cite numbers afterwards to say something like “see, two or three weeks ago they also had only X known cases”.
I would probably try convincing along the lines of: (1) if everyone is going to stop their social life in two weeks anyway, we might as well do it today, and (2) many people are asymptomatic or have mild symptoms, and the incubation time is several days during which people already spread the virus, so by the time you know of 1 person in your neighborhood with severe symptoms, there are probably already a hundred who spread the virus.
Also, when talking about the probability of death, I would add that even “non-death” can mean a lot of pain and irreversibly damaged health.
Most people are altruistic, therefore I would emphasise “you might unknowingly infect people you care about” over “you might get sick and die”. (Also, gender stereotypes: men are socially conditioned to not worry about what happens to them, but they are supposed to protect their families.)
If your parents don’t have Skype (or equivalent) ready, install it now.
Start buying stuff for your parents even before you have convinced them. Say “I know you don’t share my worries, but knowing that you have this stuff makes me feel much better, please accept it”.
If she manages to convince them later, the supplies will already be there, so it’s definitely a good move.
Not sure whether this is what you meant, but there is a difference between a situation where resources are abundant and population growth is exponential, with the rate determined by the speed of reproduction, and one where resources become scarce and reproduction is only one important parameter along with survival and interaction with competitors.
To continue with your example, imagine that Y has a faster doubling rate than X (assuming abundant resources), but X can disassemble Y to create its own copies while Y can’t do the same to X. So there will first be a period when Y exponentially outgrows X, followed by a period where Y gradually disappears.
If you want to model this by matrices or something similar, you need to somehow include this aspect.
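As a minimal sketch (all rates and numbers here are made up), a discrete-time model with a crude mass-action interaction term shows both phases, Y first racing ahead and then collapsing once X is numerous enough to disassemble it faster than it replicates:

```python
def step(x, y, rx=1.1, ry=1.5, predation=0.001):
    # Y doubles faster (ry > rx), but X converts encountered Y
    # into copies of itself; crude toy model, no conservation laws
    eaten = min(predation * x * y, y)
    return x * rx + eaten, y * ry - eaten

x, y = 1.0, 1.0
trajectory = [(x, y)]
for _ in range(50):
    x, y = step(x, y)
    trajectory.append((x, y))

# early on Y is far ahead of X; by the end Y has nearly vanished
# while X keeps growing
```

A plain matrix (linear) model cannot produce this reversal; the `x * y` interaction term is exactly the nonlinearity that has to be included somehow.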
Also, the reality will be more complicated, because the values of X and Y and their interaction may depend on local environment. So it is possible that X eliminates Y in warm waters, but Y survives around the poles. Then it is possible that X evolves into intelligent species that causes global warming… okay, this is probably outside the scope of the original question.
So far I have never had to protect myself against coronavirus in summer.
Under more usual circumstances, I simply don’t think about my phone as a possible infection vector. Which is possibly a big mistake.
The wallet is usually in some bag.
From my perspective, my wallet is a part of “outside the house”. I don’t literally leave it outside, but I leave it in my coat’s pocket, and never touch it when I am inside. Now I learned to do the same thing with my keys—I open the door, put the keys in the pocket, remove the coat and hang it. Then I wash my hands. So the wallet and keys are not touched until I go for a walk again, so it’s kinda equivalent to leaving them outside.
The most problematic thing is the phone. That one I use both inside and outside, so I have to clean it a lot. (It would be nice to have two phones, where you could use one to remotely activate or deactivate the other. Then I would have an inside phone and an outside phone.)
More generally, this strategy seems like what cultures more obsessed with purity do. Instead of cleaning everything all the time, you specify various zones of cleanness, clean things when they cross the boundary in the wrong direction, and develop instincts against unthinkingly crossing the boundary in the wrong direction.
If your home is “pure” and the wallet is “impure”, then obviously you shouldn’t handle the wallet at your home, unless you carefully perform the “purification ritual”. You don’t even have to remember why the wallet is “impure”, just the fact that it is. And if you keep these rules all your life, you won’t forget it, because the thought of using the wallet at your home will automatically invoke a feeling of “dirtiness”.
In my opinion:
“The closer an idea is to what you already believe the easier it is to think of it.”—Yes.
“The closer an idea is to the truth the easier it is to think of it.”—No.
There is this idea of systematic bias: of errors that all people make for the same reasons (e.g. because making this type of error often provided an evolutionary advantage, or because neural networks are likely to make this type of error). Ideas like “there are supernatural agents that act in our world” are easy; discovering electricity is hard.
A related thing I was thinking about for some time: Seems to me that the line between “building on X” and “disagreeing with X” is sometimes unclear, and the final choice is often made because of social reasons rather than because of the natural structure of the idea-space. (In other words, the ideology is not the community; therefore the relations between two ideologies often do not determine the relations between the respective communities.)
Imagine that there was a guy X who said some wise things: A, B, and C. Later, there was another guy Y who said: A, B, C, and D. Now depending on how Y feels about X, he could describe his own wisdom either as “standing on the shoulders of giants, such as X”, or as “debunking the teachings of X, who was foolishly ignorant about D”. (Sometimes it’s not really Y alone, but rather the followers of Y, who make the choice.) Two descriptions of the same situation; very different connotations.
To give a specific example, is Scott Alexander a post-rationalist? (I am not sure whether he ever wrote anything on this topic, but even if he did, let’s ignore it completely now, because… well, he could be mistaken about where he really belongs.) Let’s try to find out the answer based on his online behavior.
There are some similarities: He writes a blog outside of LW. He goes against some norms of LW (e.g. he debates politics). He is admired by many people on LW, because he writes things they find insightful. At the same time, a large part of his audience disagrees with some core LW teachings (e.g. all religious SSC readers presumably disagree with LW taking atheism as the obviously rational conclusion).
So it seems like he is in a perfect position to brand himself as something that means “kinda like the rationalists, only better”. Why didn’t this happen? First, because Scott is not interested in doing this. Second, because Scott writes about the rationalist community in a way that doesn’t even allow his fans (e.g. the large part that disagrees with LW) to do this for him. Scott is loyal to the rationalist project and community.
If we agree that this is what makes Scott a non-post-rationalist, despite all the similarities with them, then it provides some information about what being a post-rationalist means. (Essentially, what you wrote in the article.)
this may be because they think online schooling or homework will be an adequate substitute for in-person schooling
Then we have another curious fact: that we needed the coronavirus to notice that schools can be replaced by a much cheaper alternative.
(I mean, previously people have tried to start new online schools, but as far as I know, they didn’t try to replace the existing schools with online schools. But now we see it as a realistic option.)
Given that, if I propose an intervention like making homemade masks from fabric which reduced handwashing compliance by 1% (perhaps due to distracting people or making them think handwashing is less critical,) it would need to be astonishingly effective to be net positive. And most such approaches being discussed are, as far as I can tell, nowhere near that level of effectiveness.
This argument depends a lot on the correctness of your model. How do you know which proposals reduce handwashing compliance by 1%? Without numbers, it becomes a fully general argument against doing or even debating anything (other than washing your hands).
Sorry, no specific number here, but my reasoning would approximately go like this:
First, how long can I afford to be self-quarantined? Buying food is not the main problem for me (I can cook, rice is cheap, I have enough money to hypothetically buy enough rice for a year); the limiting factor is how much vacation I can get, and whether I want to burn it all now. Even assuming that coronavirus is the highest priority, I suspect there may be two major waves of infection: one now, and one during the autumn. (Quitting is not an option; then I would have to pay my health insurance and would run out of money much faster.)
Second, assume exponential growth, until almost everyone is sick. Now you can estimate the peak, and time your self-quarantine so that it is around that peak.
The problem is, the noise in estimation of the peak is probably greater than my vacation time, so I can’t really do this in practice. Oops.
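For the record, the naive extrapolation described above can be sketched like this (all numbers are hypothetical, purely for illustration):

```python
import math

def days_to_peak(current_cases, doubling_days, population, attack_rate=0.6):
    # naive pure-exponential extrapolation: approximate the "peak" as the
    # point where cumulative infections reach attack_rate * population
    doublings = math.log2(attack_rate * population / current_cases)
    return doublings * doubling_days

# hypothetical inputs: 100 known cases, doubling every 5 days, 5M people
estimate = days_to_peak(100, 5, 5_000_000)   # roughly 74 days
```

With these inputs, a one-day error in the estimated doubling time shifts the answer by about two weeks, which is exactly the “noise greater than my vacation time” problem.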
The second best option is partial self-quarantine, that is reducing exposure to the minimum level I can keep for a few months. Try to work remotely whenever possible, never eat outside your home, reduce social activities to minimum. When? Well, I already started this week—on Monday I asked my boss to let me work from home, started cooking every day, took my child out of kindergarten, and cancelled a birthday party this weekend. Seemed a bit paranoid… and then on Friday we got the first confirmed coronavirus case in my country.
When there are sufficient supplies of things like food, like now, and people start hoarding, shortages become a self-fulfilling prophecy.
Would it make sense to encourage the panic to start too soon? First the customers would cause a shortage, then the producers would increase their production in hope of easy profit, then the shortage would end with everyone having enough stuff at home… and then the actual need would come.
More simply, if people are going to empty the shops eventually, I prefer if they do it one month before the actual crisis rather than one week before it. Because during one month, the market may fix the shortage, but one week is not enough time to do much.
Given how inexpensive and useful it is to do this, why do so few people do it?
Because there are so many possible topics, that even if each of them takes relatively little time, together they would take a lot?
For example, you mentioned “an obscure country” and “a particular era”, and also a focus on politics and military (as opposed to science, or art, or sport). Okay, maybe you can do it in a week, or in an afternoon. But why that country, and why that era? How much would it cost to get comparable knowledge of all countries and, uhm, let’s say the entire 20th century?