You are the great commander of many robotic soldiers. Each soldier has two robotic kidneys. In wartime a soldier frequently needs another one. If it doesn’t get one, it breaks (100%). If it does, the donor might break (1%).
If all soldiers were equally good at war and equally willing to give one kidney, there would be little to discuss. But war is not that simple.
In your army 1/10 are 2 times better than the median, 1/100 are 4 times better than the median, 1/1000 are 16 times better than the median, 1/10000 are 256 times better than the median, etc.
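The multipliers above square at each tier (2, 4, 16, 256, …), so one way to read the pattern — my reading, not a formula the post states — is that a fraction 1/10^k of the army is 2^(2^(k-1)) times better than the median. A small sketch:

```python
def tier_fraction(k):
    """Fraction of the army in tier k or better (k >= 1), per the stated pattern."""
    return 1 / 10**k

def tier_multiplier(k):
    """How many times better than the median a tier-k soldier is,
    assuming the squaring pattern 2, 4, 16, 256, ... continues."""
    return 2 ** (2 ** (k - 1))

for k in range(1, 5):
    print(f"1/{10**k} of soldiers are {tier_multiplier(k)}x the median")
```

Running it reproduces the post's numbers: 2x, 4x, 16x, 256x.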
If « better » only meant « better at war », there would be little to discuss. But systematic winning is not that simple.
In your army « better » also means « more likely to give a kidney » and « more likely to set an example for the others to follow ». Which means that, conditional on a robotic soldier wanting to give a robotic kidney, it is also more likely to be critical to the war effort and more likely to set an example that the less systematic winners will follow. Oh well.
At this point, my brain wants to retreat to heuristics like « Let’s assume Scott already computed that », but that sounds sloppy. What’s your utilitarian model?
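One minimal utilitarian sketch, under loudly simplifying assumptions of my own: ignore the example-setting effect entirely, value each soldier by its war multiplier, and use only the two failure probabilities from the setup (recipient breaks with certainty without a kidney, donor breaks with probability 1%):

```python
P_BREAK_WITHOUT_DONATION = 1.00  # recipient breaks for sure without a kidney
P_DONOR_BREAKS = 0.01            # donor breaks with probability 1%

def expected_value_change(donor_value, recipient_value):
    """Expected change in total war output if the donation happens,
    relative to letting the recipient break.

    saved:  the recipient's value, which would otherwise be lost for sure.
    risked: the donor's value, lost with probability P_DONOR_BREAKS.
    """
    saved = P_BREAK_WITHOUT_DONATION * recipient_value
    risked = P_DONOR_BREAKS * donor_value
    return saved - risked

# A median soldier (1x) donating to a 256x soldier is clearly positive.
print(expected_value_change(donor_value=1, recipient_value=256) > 0)

# But a 256x soldier donating to a median soldier is net negative,
# because 0.01 * 256 = 2.56 exceeds the 1 unit of value saved.
print(expected_value_change(donor_value=256, recipient_value=1) < 0)
```

Even this stripped-down model surfaces the tension: the very soldiers most willing to donate are exactly the ones whose 1% risk costs the most, so a naive utilitarian rule would forbid donations by the best soldiers — while the un-modeled example-setting effect pulls the other way.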