In contrast to my point on ems, I do think we should avoid building AIs whose main purpose is to match (or exceed) humans in “moral value”, and avoid pursuing anything that resembles building “AI successors”. Imo the main purpose of AI alignment should be to ensure AIs help us thrive and achieve our goals, rather than to embed our “values” into AIs so that those “values” get promoted independently of our existence. (“Values” is in scare quotes because I don’t think there’s any such thing as human values: individuals differ a lot in their values, goals, and preferences.)