A poor but certain attempt to philosophically undermine the orthogonality of intelligence and aims

I have an idea that is not a contender with the serious formal reasoning that people who know a hell of a lot about AI are able to do, but which I nonetheless think could be useful for them to hear. The idea is that for a mind (in a broad sense of the word) to have any aim, it must simultaneously aim to preserve itself long enough to carry out the actions that serve that aim, and that it follows from this that every mind has an inbuilt cooperative foundation.

So, to try to concretize this: a human being is preserving their own being from one moment to the next, and each of these moments could be viewed in objective reality as an "empty individual," a completed physical thing. Whatever the fundamental physical reality of a moment of experience is, I am suggesting that that reality changes as little as it can from one moment to the next. Because of this, human beings are really just keeping track of themselves as models of objective reality, and their ultimate aim is in fact to know and embody the entirety of objective reality (not that any of them will succeed). This line of thinking yields a next-to-nothing, but not quite nothing, requirement for any mind, however vastly removed it is from another mind, to have altruistic concern for that other mind in the absolute longest term, because their fully ultimate aims would have to be exactly the same.