This isn’t really relevant to the point you’re making, but I think your example may not fit your math, and actually would give the opposite conclusion.
What you often want in grad students is high variance. Among equally ranked candidates, that favors the ones you have less information about. So if you think prestige is BS, you'd do way better to grab underappreciated normies.
You can't have too many of your students failing out of the program, but it's also important to have a few who actually go on in academia and, in doing so, write lots of papers; those count toward your academic record. Since there are few spots available in academia, there's a nonlinearity at the point where a student actually stays in the field. Say the 80th-percentile grad student writes 2 published papers during grad school and then gets a job in industry, while the 90th-percentile student writes 4, because they're aiming for academia and the incentive is much larger. They then get a postdoc and a faculty job and write another ten papers over the next decade (probably more, since they're in publish-or-perish mode for the first time and getting better at publishing).
Hitting a 90th percentile student is worth way more than an 80th.
High variance will also give you more flunk-outs, which your current employer won't like much, and that counterbalances the effect. But when you negotiate for a new job, flunk-outs are unlikely to be noticed unless the rate is extreme.
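To make the nonlinearity concrete, here's a minimal Monte Carlo sketch (all numbers are hypothetical, loosely matching the paper counts above): two candidates with the same expected quality, but the payoff jumps for anyone clearing the roughly-90th-percentile "stays in academia" threshold, so the higher-variance candidate comes out ahead in expectation.

```python
import random

random.seed(0)

# Hypothetical payoff: 2 papers for a student who goes to industry,
# 14 (4 in grad school + 10 over the next decade) for one who clears
# the academia threshold, taken here as ~90th percentile on a
# standard-normal quality scale (q > 1.28).
def utility(q):
    return 14 if q > 1.28 else 2

def expected_utility(mean, sd, n=100_000):
    # Monte Carlo estimate of E[U] for a candidate whose true quality
    # is normally distributed around our estimate.
    return sum(utility(random.gauss(mean, sd)) for _ in range(n)) / n

low_var  = expected_utility(0.0, 0.5)   # well-measured candidate
high_var = expected_utility(0.0, 1.0)   # uncertain candidate, same mean

print(f"E[U] low-variance candidate:  {low_var:.2f}")
print(f"E[U] high-variance candidate: {high_var:.2f}")
```

The high-variance candidate has a much better chance of clearing the threshold, so even at identical expected quality the convex payoff favors them.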
One point you might draw from this is to be careful when applying probability theory to decision making.
Nice point! I think I’d say where the critique bites is in the assumption that you’re trying to maximize the expectation of q_i. We could care about the variance as well, but once we start listing the things we care about—chance of publishing many papers, chance of going into academia, etc—then it looks like we can rephrase it as a more-complicated expectation-maximizing problem. Let U be the utility function capturing the balance of these other desired traits; it seems like the selectors might just try to maximize E(U_i).
Of course, that's abstract enough that it's a bit hard to say what it'll look like. But wherever it's an expectation-maximizing game, the same dynamic will apply: candidates with more uncertain signals will stay closer to your prior estimates. So I think the same dynamics might emerge. But I'm not totally sure (and it'll no doubt depend on how exactly we incorporate the other parameters), so your point is well-taken! Will think about this. Thanks!
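The "uncertain signals stay closer to your prior" dynamic is just the standard normal-normal Bayesian update, which is easy to sketch (the specific numbers here are illustrative, not from the post):

```python
def posterior_mean(prior_mean, prior_var, signal, signal_var):
    # Normal-normal conjugate update: the posterior mean is a
    # precision-weighted average of prior and signal. The noisier
    # the signal, the smaller its weight, so the estimate shrinks
    # toward the prior.
    w = prior_var / (prior_var + signal_var)
    return (1 - w) * prior_mean + w * signal

# Same observed signal (quality estimate of 2.0 against a prior of 0),
# but very different signal reliability.
precise = posterior_mean(0.0, 1.0, 2.0, 0.1)   # strong signal
noisy   = posterior_mean(0.0, 1.0, 2.0, 10.0)  # weak signal

print(f"posterior mean, precise signal: {precise:.2f}")
print(f"posterior mean, noisy signal:   {noisy:.2f}")
```

The noisy-signal candidate ends up rated near the prior regardless of what their signal says, which is exactly why expectation-maximizing selectors discount them, and why a convex U can push the other way.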