[Question] Incentives affecting alignment-researcher encouragement

My hypothesis: I think the incentives for “cultivating more/better researchers in a preparadigmatic field” lean toward “don’t discourage even less-promising researchers, because they could luck out and prove good/useful to alignment in some unexpected way”.

Analogy: This is like how investors encourage startup founders: they are betting on the whole flock, not necessarily because founding a startup is any particular founder’s best individual bet.

If timelines are short enough that [our survival depends on [unexpectedly-good paradigms]], and [unexpectedly-good paradigms] come from [black-swan researchers], then the AI alignment field is probably (on some level, assuming some coordination/game theory) incentivized to black-swan farm researchers.
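
A minimal sketch of the portfolio logic, assuming independent researchers and purely invented numbers (the researcher count `N` and per-researcher breakthrough probability `p` below are illustrative assumptions, not estimates):

```python
import numpy as np

# Toy model of "black-swan farming" (all numbers are made up for
# illustration, not estimates). Each marginal researcher the field
# encourages has a small, hard-to-predict chance p of producing a
# paradigm-level breakthrough. The field cares about whether *anyone*
# succeeds, so its payoff scales very differently from any individual's.

N = 1_000          # marginal researchers the field encourages
p = 0.001          # per-researcher chance of a black-swan breakthrough
trials = 100_000   # Monte Carlo trials

rng = np.random.default_rng(0)
breakthroughs = rng.binomial(N, p, size=trials)  # successes per simulated world

print(f"Per-researcher success chance:     {p:.3f}")
print(f"P(field gets >= 1 breakthrough) ~ {(breakthroughs >= 1).mean():.3f}")
print(f"Closed form 1 - (1 - p)^N       = {1 - (1 - p) ** N:.3f}")
```

Under these made-up numbers, each individual researcher almost certainly “fails” (a 0.1% chance of a breakthrough), while the field as a whole gets at least one breakthrough roughly 63% of the time. That gap between individual-level and field-level expected value is exactly the wedge the hypothesis points at.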

Note: This isn’t necessarily bad (in fact it’s probably good overall); it just puts the incentives into perspective, so that individual researchers needn’t feel so bad about “not making it” (where “making it” could mean getting a grant, getting into a program, or...)

The questions: Is this real or not? And what, if anything, should anyone do with this knowledge in hand?
