I love that you wrote this. I think LW needs more instrumental advice like this.
In this particular instance I think you’re conflating the idea of dressing cool with dressing conservatively.
These are not the same thing, although both are better than dressing badly.
It actually does come up frequently with companies that do lots of high-liability projects. Movie studios, for instance, will create a separate company for each production to limit their exposure if something goes wrong.
This is a cool idea. A fun test of this might be to create a few markets for existing mega-projects on an open prediction market like Augur, and see whether you can get people interested in actually investing in their outcomes.
There’s a whole subfield on “scoring rules,” which aim to measure people’s calibration and resolution more precisely.
There are scoring rules that incorporate priors, scoring rules that incorporate the information value to the question asker, and scoring rules that incorporate sensitivity to distance (if you’re close to the answer, you get more points). There’s also a class of “strictly proper” scoring rules that incentivize people to report their true probability (see the sketch after the list below). I did a deep dive into scoring rules when writing the Verity whitepaper. Here are some of the more interesting/useful research articles on scoring rules:
Order-Sensitivity and Equivariance of Scoring Functions: https://www.evernote.com/l/AAhfW6RTrudA9oTFtd-vY7lRj0QlGTNp4bI/
Tailored Scoring Rules for Probabilities: https://www.evernote.com/l/AAhVczys0ddF3qbfGk_s4KLweJm0kUloG7k/
Scoring Rules, Generalized Entropy, and Utility Maximization: https://www.evernote.com/l/AAh2qdmMLUxA97YjWXhwQLnm0Ro72RuJvcc/
The Wisdom of Competitive Crowds: https://www.evernote.com/l/AAhPz9MMSOJMcK5wrr8mQGNQtSOvEeKbdzc/
A formula for incorporating weights into scoring rules: https://www.evernote.com/l/AAgWghOuiUtIe76PQsXwFSPKxGv-VkzH7l8/
Sensitivity to Distance and Baseline Distributions in Forecast Evaluation: https://www.evernote.com/l/AAg7aZg9BjRDLYQ2vpGow-qqN9Q5XY-hvqE/
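To make “strictly proper” concrete, here’s a minimal sketch in Python (my own illustration, not taken from any of the papers above) showing that under the Brier score, your expected penalty is uniquely minimized by reporting your true probability:

```python
import numpy as np

def brier_score(report, outcome):
    """Brier score for a binary event (outcome is 0 or 1); lower is better."""
    return (report - outcome) ** 2

# If your true belief is p, your expected Brier score is
#   p * brier_score(r, 1) + (1 - p) * brier_score(r, 0),
# which is uniquely minimized at r == p: the defining property
# of a strictly proper scoring rule.
true_p = 0.7
reports = np.linspace(0.01, 0.99, 99)
expected = true_p * brier_score(reports, 1) + (1 - true_p) * brier_score(reports, 0)
print(f"expected score minimized at report = {reports[np.argmin(expected)]:.2f}")  # 0.70
```

The log score has the same property, while a naive linear rule (you get paid the probability you assigned to whatever happened) does not: its expected payout is maximized by reporting 0 or 1, which is exactly the exaggeration strictly proper rules are designed to prevent.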
This may be because of my particular learning style. I tend to get most of my deep learning from the actual application of the skill, which is based on the How resource. I use the What resource in a very surface way, just getting particular facts or techniques when I’m stuck. However, I agree that What books tend to cover material in a deeper way.
Something about the way you wrote this made me instantly like you.
As a counter to this, I got very, very far with this sort of self-improvement for a very long time (though I think LW was very bad at teaching it, and I mostly got it from other sources). I’ve recently focused on the alignment-based models, as I was starting to hit diminishing returns with the other approach, but I did get a lot out of the previous paradigm.
I think the alignment-based models are very, very powerful, and I also think that the “overriding the elephant” models are quite powerful and get too much of a bad rap.
Thanks! This looks really useful.
It sounds like your naming process is actually Focusing. For me, the names don’t matter as much, and I just have a conversation involving Focusing to figure out what the parts want.
Maybe, or maybe there’s a different context entirely. As Said says, there really wasn’t much context to this at all.
I was going to make this same comment. Without context, it seems like a lot of fixing something that ain’t broke.
I have a model that there’s something like a Pareto distribution where 20% of the people in a field contribute 80% of the Actually Important advances, and of those advances, about 80% come from a further 20% split of that group: people who are deliberately and strategically choosing fields such that they can rationally expect to make advances. This implies that, for instance, in climate change, the ~4% of people who have actually done a Fermi estimate of their impact on climate change will contribute ~64% of the relevant advances in the field (arithmetic spelled out below).
One thing you can say is that this is awful, that you’d really like a field without this ridiculous distribution, and that people should therefore wait to go into the field until they can contribute to Actually Important things. But it seems like there are a lot of countervailing forces preventing this, including the status incentive of saying “this is a field only for people who work on Actually Important things.” If your timelines are really short, you might not be worried about this, but it does seem like something to worry about over a decade or so of putting this message out in a specific field.
The other way to handle it would be to expect the Pareto distribution to happen because most people just aren’t strategic about their careers and rationalize their choices after the fact. The goal in that case is just to try to grow the field as much as possible, knowing that some small percentage of the people who go into it will be strategic thinkers who contribute quite a bit. Not only does this strategy seem to match the pattern of fields that have actually grown and made significant advances in solving problems, but it also has the benefit of capturing the additional ~36% of Actually Important advances that come from people who aren’t strategically trying to create impact.
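Spelling out the arithmetic behind those figures (just the nested Pareto split from above, nothing extra):

$$0.20 \times 0.20 = 0.04 \ \text{(strategic people)}, \qquad 0.80 \times 0.80 = 0.64 \ \text{(their advances)}, \qquad 1 - 0.64 = 0.36 \ \text{(advances from everyone else)}$$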
The announcement post for RAISE was specifically removed from the front page, with Oliver stating that the reason was an explicit LW policy of not allowing organization announcements on the front page. Can we perhaps get some clarity on this policy?
I looked over your posts and I like them. If the question in your title were personally directed at me, my answer would be no.
See also: Please Don’t Fight The Hypothetical, plus the excellent comment from David Gerard that explains why people exhibit this behavior, and why explaining to them really nicely that this will help them learn might be seen as disingenuous.
I think one thing this post fails to take into account is the difference between endorsed, professed, conscious beliefs and unconscious aliefs. I suspect the “morals as a convenience” theory is actually talking about the latter type of belief, while the “factual advocacy” approach is more focused on the former.
While it is true that factual advocacy can affect unconscious aliefs, there are much more effective ways to do so, many pioneered and tested in the field of marketing, which in many ways can be seen as the study of how to affect people’s aliefs such that they change their actions.
I think the central question Duncan is getting at in the article is where the line should be. Society is putting it more towards micro; Duncan thinks it’s swung too far and wants it more towards macro. But it’s clear that just saying “have a line” doesn’t help with the dilemma very much (unless people don’t have personal boundaries at all, in which case saying “have a line” is definitely helpful advice).