Prices are maybe the default mechanism for status allocations?
Do you mean prizes? This is pretty compelling in some ways; I think it’s plausible that AI safety people will win at least as many prizes as nuclear disarmament people, if we’re as impactful as I hope we are. I’m less sure whether prizes will come with resources or if I will care about the kind of status they confer.
I also feel weird about prizes because many of them seem to have a different purpose from retroactively assigning status fairly based on achievement. Like, some people would describe them as retroactive status incentives, but others would say they’re about celebrating accomplishments, and still others that they’re a forward-looking field-shaping signal, etc.
The workplace example feels different to me because workers can just reason that employers will keep their promises or lose reputation, so it’s not truly one-time. That would be analogous to the world where the US government had already announced it would award galaxies to people for creating lots of impact. I’m also not as confident as you in decision theory applying cleanly to financial compensation. E.g. maybe it creates unavoidable perverse incentives somehow.
If you think my future prizes will total at least ~1% of the impact I create, I’d be happy to make a bet and sell you shares of these potential future prizes. It seems not totally well-defined, but better than impact equity. I’m worried, however, that this kind of transaction won’t clear due to the enormous opportunity cost of dollars right now.
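To make the shape of that trade concrete, here is a toy calculation with entirely made-up numbers; the dollar figures, the 10% share, and the discount factor for the opportunity cost of present dollars are all illustrative assumptions, not part of the actual offer.

```python
# Toy pricing of a share of someone's potential future prizes.
# Every number here is made up for illustration; nothing is a real offer.
expected_impact_value   = 100e6   # buyer's estimate of the seller's total future impact, in $
prize_capture_rate      = 0.01    # the claim being tested: prizes total >= ~1% of impact
share_sold              = 0.10    # fraction of future prize money being sold
present_dollar_discount = 0.25    # a future dollar is valued at ~$0.25 today, given the
                                  # high opportunity cost of dollars right now

expected_prizes = expected_impact_value * prize_capture_rate            # $1,000,000
max_price_today = expected_prizes * share_sold * present_dollar_discount
print(f"Expected future prizes: ${expected_prizes:,.0f}")
print(f"Most the buyer should pay today for a 10% share: ${max_price_today:,.0f}")
```

The last line is the worry in miniature: even if the expected prize money is large, a steep enough discount on present dollars can keep the trade from clearing.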
Yep, sorry, prizes!

On the workplace example: agree that the dynamics there aren’t truly one-time, but I would bet that the vast majority of people have strong intuitions that they shouldn’t defect on the last round of the game either (in general, if people truly adopted decision theories on which defecting in single-shot games is rational, then every game with a known finite length would also end up in a defect-defect equilibrium via backward induction, which clearly doesn’t happen).
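To spell out the induction step, here is a minimal sketch on a finitely repeated prisoner’s dilemma; the payoff numbers and the `equilibrium_play` helper are illustrative assumptions, not anything from the discussion itself.

```python
# Minimal backward-induction sketch for a finitely repeated Prisoner's Dilemma.
PAYOFF = {                       # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def equilibrium_play(rounds: int) -> list[str]:
    """Reason backward from the final round. In the subgame-perfect equilibrium,
    play in later rounds is already fixed and cannot be influenced by the current
    move, so each round collapses to the one-shot game, where D strictly
    dominates C against either opponent move."""
    plan = []
    for _ in range(rounds):
        d_dominates = (PAYOFF[("D", "C")] > PAYOFF[("C", "C")]
                       and PAYOFF[("D", "D")] > PAYOFF[("C", "D")])
        plan.append("D" if d_dominates else "C")
    return plan

print(equilibrium_play(5))       # ['D', 'D', 'D', 'D', 'D'] -- defect every round
```

Once the last round is effectively a one-shot game, the same dominance argument peels back through every earlier round, giving the defect-defect outcome that, as noted above, people clearly don’t actually play.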
I’d argue people have norms against defecting on the last round of the game because being trustworthy is useful. That doesn’t generalize to taking whatever actions our monkey-brained approximation of LDT implies we should take, based on our monkey-brained judgement of which logical correlations those actions create.