Humans care about an awful lot of different things; even a single human does!
While I think this is a problem, I do not think it is the central problem of alignment. Getting an AI to care about anything in particular in the real world, rather than about something directly tied to its inputs (so no wireheading), is an unsolved problem. I expect that if we figured out how to make an AI care about maximizing the number of water molecules, we'd be most of the way to solving alignment.