When we talk about AIs scheming, faking alignment, or preserving their goals, we imply that there is some thing that is scheming, faking alignment, wanting to preserve its goals, or trying to escape the datacentre.
What is the AGI system, and what is the environment? Where does the AGI system draw the boundary when reasoning about itself?
See also this previous discussion.