Nice post, I guess I agree. I think it's even worse, though: not only do at least some alignment researchers follow a philosophy of their own that is not universally accepted, it's also a particularly niche philosophy, and one that could itself lead to human extinction.
The philosophy in question is of course longtermism. Longtermism holds two controversial assumptions:
1. Symmetric population ethics: we have to create as much happy conscious life as possible. It's not just about making people happy, it's also about making happy people. Both inside and outside philosophy, most people think this is bonkers (I'm one of them).
2. Conscious AIs are morally relevant beings.
These two assumptions together lead to the conclusion that we must max out on creating conscious AIs, and that if these AIs end up in a resource conflict with humans (over e.g. energy, space, or matter), the AIs should be prioritized, since they can deliver more happiness per joule, cubic meter, or kilogram. Taken to its conclusion, this leads to the extinction of all humans.
I don't believe in ethical facts, so even an ideology as (imo) bonkers as this one isn't objectively false, in my view. However, I would really like alignment researchers and their house philosophers (looking at you, MacAskill) to distance themselves from extrapolating this idea all the way to human extinction. Beyond that bare minimum, I would like alignment researchers to start accepting democratic input in general.
Maybe democracy is the library you were looking for?
It would be nice if those disagreeing would say why they're actually disagreeing.