Prestige Maxing is Killing the AI Safety Field
In the sense that people contort their opinions too much to make them palatable to outsiders, or that people within the AI safety community end up pursuing research that looks good to their peers instead of what does the most marginal good (i.e. doing increasingly elaborate research on the specifics of x-risk instead of the boring grunt work of lobbying in D.C.).
Those things are also bad, but this was more about companies' programs prioritising recruiting people who look good over people actually likely to help solve alignment.
I have since changed my mind: this may be happening less than I thought.
And it's more a capability issue than a values problem: orgs aren't bragging enough about their unconventional recruits who do well but don't have impressive-looking credentials, and they aren't communicating well about programs/events that don't require prestige to take part in but are still valuable.