From another point of view: some philosophers are convinced that caring about conscious experiences is the rational thing to do. If it’s possible to write an algorithm that works in a way similar to how their minds work, we already have an (imperfect, biased, etc.) agent that is somewhat aligned, and is likely to stay aligned after further reflection.
I think this is an interesting point—but I don’t conclude optimism from it as you do. Humans engage in explicit reasoning about what they should do, and they theorize and systematize, and some of them really enjoy doing this and become philosophers so they can do it a lot, and some of them conclude things like “The thing to do is maximize total happiness” or “You can do whatever you want, subject to the constraint that you obey the categorical imperative” or as you say “everyone should care about conscious experiences.”
The problem is that every one of those theories developed so far has been either (1) catastrophically wrong, (2) too vague to act on, or (3) relative to the speaker’s intuitions in some way (e.g. intuitionism).
By “catastrophically wrong” I mean that if an AI with control of the whole world actually followed through on the theory, it would kill everyone or do something similarly bad. (Classical utilitarianism is the classic example of this.)
Basically… I think you are totally right that some of our early AI systems will do philosophy and come to all sorts of interesting conclusions, but I don’t expect those to be the correct conclusions. (My metaethical views may be lurking in the background here, driving my intuitions about this… see Eliezer’s comment.)
Do you have an account of how philosophical reasoning in general, or about morality in particular, is truth-tracking? Can we ensure that the AIs we build reason in a truth-tracking way? If truth isn’t the right concept for thinking about morality, and instead we need to think about e.g. “human values” or “my values,” then this is basically a version of the alignment problem.