Prestige Maxing is Killing the AI Safety Field
In the sense that people contort their opinions to make them palatable to outsiders, and that people within the AI safety community end up pursuing research that looks good to their peers instead of what does the most marginal good (i.e., producing increasingly elaborate research on the specifics of existential risk instead of doing the boring grunt work of lobbying in D.C.).