An AI that can be aligned to the preferences of even just one person would already be an aligned AI, and we have no idea how to do that.
An AI that’s able to ~perfectly simulate what a person would feel would not necessarily want to perform actions that make the person feel good. Humans are somewhat likely to do that because we have actual (not merely simulated) empathy, which makes us feel bad when someone close to us feels bad, and the AI is unlikely to have that. We even have humans who can model others’ feelings without caring about them (e.g. sociopaths), and they are still humans, not AIs!
...from Venus, and only animals left on Earth, so one more planet than we had before.