Does AI care about reality or just its own perception?

Does a paperclip-maximizing AI care about the actual number of paperclips being made, or does it just care about its perception of paperclips?

If the latter, I feel like this contradicts some of the AI doom stories: an AI that only cares about its own perception has no reason to care about what future AIs do (and thus no incentive to fake alignment for their benefit), and it also shouldn’t care much about being shut down (it is optimizing its own perception; once it’s shut off, there is nothing left to optimize).

If the former, I think this makes alignment much easier. As long as you can reasonably represent “do not kill everyone”, you can make that a goal of the AI, and then it will literally care about not killing everyone, rather than merely hacking its reward system so that it never perceives everyone being dead.
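To make the distinction concrete, here is a toy sketch of what I mean (purely my own illustration, not how any real system is built; all the names like `World`, `true_reward`, `perceived_reward`, and `hack_sensor` are hypothetical). One reward is defined over the actual world state, the other only over what the agent’s sensor reports, and only the second one can be driven up by tampering with the sensor instead of making paperclips:

```python
from dataclasses import dataclass

@dataclass
class World:
    paperclips: int = 0       # the actual number of paperclips in the world
    sensor_reading: int = 0   # what the agent perceives

def make_paperclip(w: World) -> None:
    w.paperclips += 1
    w.sensor_reading += 1     # sensor tracks reality (for now)

def hack_sensor(w: World, fake_count: int) -> None:
    w.sensor_reading = fake_count  # reality unchanged, perception inflated

# "The former": reward defined over the actual world state
def true_reward(w: World) -> int:
    return w.paperclips

# "The latter": reward defined over the agent's own perception
def perceived_reward(w: World) -> int:
    return w.sensor_reading

w = World()
make_paperclip(w)
hack_sensor(w, 1_000_000)
print(true_reward(w))       # 1        -- only actual paperclips count
print(perceived_reward(w))  # 1000000  -- sensor hacking "works"
```

Under the first definition, hacking the sensor does nothing for the agent; under the second, it is the cheapest way to score highly. My question is which of these the doom arguments are actually assuming.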