I’d guess it’s a classic bias-variance tradeoff. Rolling novel causal models is high variance, while outside-view considerations can be biased in ways you’re blind to, but can be good enough for coarse analysis when you just need to get the sign right.
Might be worth noting that this strongly tilts towards the inside view, and suggesting a strong counterpoint (statistical analysis of the major trends that potentially gave rise to the various viewpoints here).
Read as many such critiques as possible, take notes, and do iterated compression/summarization of the notes. This way you’ll build your own toolbox of heuristics for evaluation that you deeply understand rather than aping the experts without really understanding.
Another reason not to integrate is that integration is actually just bad in some circumstances. You don’t want all your heuristics to propagate to all possible domains at once, since they wouldn’t all be applicable, and too many options would likely make your decision-making worse. Some kinds of drug experiences demonstrate this.
I have to trade off the cost of following a high-complexity decision theory against the risk of being dominated times the badness of being dominated.
Great to see these points being made to a broader audience. My take from a similar investigation into science funding is that there is a common pattern to these really high-impact researchers who have trouble getting funding: they’re often doing methods innovation rather than object-level progress in some area.* It’s really hard to get grantors to understand the potential value of methods research, even though it underlies scientific advancement. Big shots like the aforementioned Nobel winner, Douglas Engelbart, and many others push for direct methods research only to have it seemingly fall on deaf ears, even given their past accomplishments.

I think part of the reason is that the benefits of major methods breakthroughs are basically unbelievable from the perspective of normal scientific work, and that people’s ability to think coherently about hits-based research isn’t great. If we want breakthroughs, the world desperately needs a billionaire who understands the value of methods work. I was really hopeful that Moskovitz would be this person, given his blog posts around Asana and solving the meta problem, but I’ve been disappointed by OpenPhil seeming to move in the direction of other foundations in terms of the range of grants they give out. What I mean is that, glancing through their grants list, you could transplant most of the grants onto the list of another foundation and no one would bat an eyelid. Thankfully there are a few exceptions, and people in methods have to take any concessions they can get. The Templeton Foundation is another grantor in this space that has at least tried a little.
*Yes, there are arguments to be made about whether methods work is better thought of as something that can be pursued as its own thing vs. something that must generally arise out of object-level work. And I’d be thrilled if that argument *was actually happening*.
(QRI is working on the consciousness meter btw ;)
Regulatory capture, in practice, means that if you circumvent the existing players they can have you arrested. Many, many people are trying to figure out how to supply insulin to diabetics in the US, but no dice so far.
One reason feedback feels unpleasant is that it can fail to engage with what actually interests you about the area. When you receive such feedback, there’s then the feeling of needing to respond for the sake of bystanders who might otherwise assume there aren’t good responses to it.
Keep in mind doctors are optimizing for patients of average ability wrt not acting insanely on their instructions. I found a lot more sympathy for people in positions of authority when I gained experience with the breathtaking number of ways people can alter what seem to be very simple instructions.
If it were in person the nurse may even have smiled at him.
Ah: mimicking the post-rigor state, and that being sufficient to score points in interactions with the pre-rigorous, is what’s babbly about babblers.
I think the hard reification of villagers and werewolves winds up stopping curiosity at the wrong places in the abstraction stack. Seeing agents as following mixed strategies determined by local incentives which tend to be set by super-cooperators and super-defectors seems better to me. It’s also a much more tractable problem and matches what I see on the ground in orgs.
That sounds equivalent to the Kelly criterion: most of your bankroll is in a low-variance strategy, and some proportion is spread across strategies with varying amounts of higher variance. Is there any existing work on Kelly optimization over distributions rather than points?
edit: full Kelly allows you to get up to 6 outcomes before you’re in 5th-degree-polynomial land, which is no fun. So I guess you need to choose your points well. http://www.elem.com/~btilly/kelly-criterion/
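The polynomial blowup only bites if you want a closed form: expected log-growth is concave in the stake fraction, so a numeric search handles any number of outcomes. A minimal stdlib-only sketch (the outcome table and function names are hypothetical, not from the linked page):

```python
import math

def expected_log_growth(f, outcomes):
    """E[log(1 + f*r)] for stake fraction f over discrete (probability, net return) outcomes."""
    return sum(p * math.log(1.0 + f * r) for p, r in outcomes)

def kelly_fraction(outcomes, lo=0.0, hi=1.0, tol=1e-9):
    """Ternary search for the growth-optimal fraction; valid because the objective is concave in f."""
    while hi - lo > tol:
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if expected_log_growth(m1, outcomes) < expected_log_growth(m2, outcomes):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

# Sanity check against the binary closed form: p = 0.6 at even odds
# gives f* = p - (1 - p)/b = 0.2.
binary = [(0.6, 1.0), (0.4, -1.0)]
print(round(kelly_fraction(binary), 6))  # 0.2
```

With six or more outcomes the same call works unchanged, which is the point: the degree-5 first-order condition never has to be solved symbolically.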
It seems like, at the end of a fairly complicated construction process, if you wind up with a model that outperforms, your prior should be that you managed to sneak in overfitting without realizing it, rather than that you actually have an edge, right? Even if, say, you wound up with something that seemed safe because it had low variance in the short run, you’d suspect that you had managed to push the variance out into the tails. How would you determine how much testing is needed before you’d be confident placing bets of appreciable size? I’m guessing there’s stuff related to structuring your stop-losses here that I don’t know about.
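One way to see why that prior makes sense: selecting the best of many candidate strategies manufactures an in-sample "edge" out of pure noise. A toy illustration with a seeded RNG and made-up parameters (no real market data, all numbers are assumptions):

```python
import random

random.seed(0)

N_STRATEGIES, N_DAYS = 1000, 252

def simulate(n_days):
    """Each 'strategy' is just i.i.d. noise: daily returns ~ Normal(0, 1%)."""
    return [random.gauss(0.0, 0.01) for _ in range(n_days)]

def mean(xs):
    return sum(xs) / len(xs)

# Two years of returns per strategy: year one in-sample, year two held out.
strategies = [simulate(2 * N_DAYS) for _ in range(N_STRATEGIES)]

# Pick the best performer on the first year...
best = max(strategies, key=lambda s: mean(s[:N_DAYS]))

in_sample = mean(best[:N_DAYS])
out_of_sample = mean(best[N_DAYS:])

# ...and watch the "edge" evaporate on the second year.
print(f"in-sample daily mean:     {in_sample:+.5f}")
print(f"out-of-sample daily mean: {out_of_sample:+.5f}")
```

The selected strategy looks strongly positive in-sample purely by construction, while its held-out performance reverts toward zero, which is the overfitting-by-selection effect the prior is guarding against.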
Agree; in this situation he should state that he feels incentivized to state 70%, and that that’s a problem.
I don’t like reifying this as dishonesty when the outside view on taking ideas seriously says that it’s pretty reasonable to update slowly as you gather more kinds of evidence than just logical argument.
This suggests to me that it’s a good idea to power boost people who are in the upper echelons of competence in any given domain, but to be careful not to power boost them enough that they exit the domain they are currently in and try to play in a new, larger one where they are of more average competence. Sort of an anti-Peter principle. At least if the domain is important. For unimportant domains you probably do want to skim the competent people out and get them playing in a more important domain.
unpaid internet arguing, without the reward of seeing a change positively impact someone’s life. The selection effect means you wind up interacting mostly with those who want to argue rather than collaborate.
Noticing what Candy Crush is doing.