This is awesome, thanks.
In case it’s of interest to anyone, I recently wrote down some short, explicit models of the costs of remote teams (I did not try to list the benefits). Here’s what I wrote:
Substantially increases activation costs of collaboration, leading to highly split focus of staff
Substantially increases costs of creating common knowledge (especially in political situations)
Substantially increases barriers to building trust (in-person interaction is key for interpersonal trust)
Substantially decreases communication bandwidth (both rate and quality of feedback), making subtle, fine-grained, specific positive feedback harder and strong negative feedback on bad decisions much easier, leading to risk-aversion.
Substantially increases the cost of transmitting potentially embarrassing information, and incentivises covering up low productivity, as it’s very hard for a manager to see the day-to-day and week-to-week output.
Let me check I’m following with some simple claims:
If it’s common knowledge that we’re in world 3, then we’re in world 4.
If it’s common knowledge that we’re in world 2, then we’re in world 4.
The key value to being in world n+1 is that you can outplay all the people in world n.
To move back from world 2 into world 1, one can punish inaccurate job titles.
To move back from world 3 into world 2, one can punish not treating workers according to their titles.
You can’t move back from world 4 to world 3.
People seem to treat it in a fatalistic way, like they’ve been told what their score will be at the end of the game, as opposed to one of their base stats (like finding out how tall you are). I tested myself on the big 5 lately, and finding out I have a fairly extreme baseline on things like neuroticism and intellect has been surprisingly valuable for understanding myself.
I understand there are also subcategories of IQ, and I am interested to know if there’s an IQ test I can take which gives me info on a variety of the more robust components of IQ (whatever they are, I don’t actually know). I could imagine this giving me advice of the type “In general try strongly to use verbal reasoning over spatial reasoning, and if you’re in a situation where spatial reasoning is necessary, make a conscious plan to put in more deliberate practice than seems necessary for the median similarly smart person around you who is learning the same skill.” If I expected to get 3+ big recommendations like that, I think I’d be quite excited to pay for a test.
I think that having IQ tie more closely to your decisions might help people understand it better than if it’s just an abstract immutable number that says you’re worse than these other people, and having it be multifactorial could be a way to help there?
But unless I can actually tie it to some decisions, I do expect finding out my IQ to make me depressed on net. Perhaps I can just use it to figure out whether or not it’s on the table for me to do math at MIRI, though my sense is that philosophical sophistication is much more the bottleneck there.
I am especially interested to hear about any strong positive/negatives from mobile and tablet users.
Yeah, I think CFAR has been heavy tailed, and I would predict that there are some individuals for whom it has counterfactually caused them to solve big problems like this.
People do assign a fair amount of status based on attractiveness of faces, and I think it’s good on the margin to not introduce that class of bias to the discussion. My current guess is that the costs aren’t commensurate with the benefits of faster recognition.
I feel like I learned something very important about my mind—you’re right, if I skim these low-level-pattern-matched paragraphs, they read as basically fine to me. This plausibly has quite important implications for AI too. So I’ve curated this post.
Reminder to do this.
(I will stop reminding you if you ask, but until then I am a fan of helping public commitment get acted on.)
Yes, the new reading is “Politics isn’t a good place to practice rationality unless all the discussants are already rational”. Not that you shouldn’t engage in discussion of politics, just that you shouldn’t go to train rationality there (when not already well practiced in other areas).
I already told Buck that I loved this post. For this curation notice, let me be specific about why.
Posts from people who think carefully and seriously about difficult questions writing about some of the big ways they changed their mind over time are rare and valuable (other examples: Holden, Eliezer, Kahneman).
OP is unusually transparent, in a way that leads me to feel I can actually update on the data rather than holding it in an internal sandbox. I feel it has not been as adversarially selected as most other writings by someone about themselves, making it extremely valuable data. (Where data is normally covered up, even small amounts of true data are often very surprising.)
I find the specific update quite useful, including all of the examples. It fits together with Eliezer’s claim (at the end of section 5 here) that you can figure out which experts are right/wrong far more often than you can come up with the correct theory yourself.
Note: even with you making a point of it, it took me two reads to understand why my initial read of “unless all the discussants are already rational” was wrong.
Agreed. I realise the OP could be misread; I’ve updated the first paragraph with an extra sentence mentioning that summarising and translating existing work/literature in related domains is also really helpful.
Thanks for the pointers to network science Jan, I don’t know this literature, and if it’s useful here then I’m glad you understand it well enough to guide us (and others) to key parts of it. I don’t see yet how to apply it to thinking quantitatively about scientific and forecasting communities.
If you (or another LWer) thinks that the theory around universality classes is applicable to thinking about how to ensure good info propagation in e.g. a scientific community, and you’re right, then I (and Jacob and likely many others) would love to read a summary, posted here as an answer. Might you explain how understanding the linked paper on universality classes has helped you think about info propagation in forecasting communities and related communities? Concrete heuristics would be especially interesting.
(Note that Jacob and I have not taken a math course in topology or graph theory and won’t be able to read answers that assume such, though we’ve both studied formal fields and could likely pick it up quickly if it seemed practically useful.)
In general we’re not looking for *novel* contributions. To give an extreme example, if one person translates an existing theoretical literature into a fully fleshed out theory of info-cascades for scientific and forecasting communities, we’ll give them the entire prize pot.
This is a very carefully reasoned and detailed post, which lays out a clear framework for thinking about approaches to alignment, and I’m especially excited because it points to one quadrant—engineering-focused research without human models—as highly neglected. For these three reasons I’ve curated the post.