It doesn’t seem credible for AIs to be more aligned with researchers than researchers are aligned with each other, or with the general population.
Maybe that’s ‘gloomy’, but it’s no different from how human affairs have progressed since the first tribes were established. From the viewpoint of broader society, it’s actually a positive development to recognize that there’s an upper limit on how much alignment efforts can be expected to yield, so that resources can be allocated to their most beneficial use.