[Question] Why might General Intelligences have long-term goals?

Short-term and long-term goals have different implications regarding instrumental convergence. If I have the goal of immediately taking a bite of an apple that is in my hand right now, I don’t need to gather resources or consider strategies; I can just do it. On the other hand, imagine I have an apple in my hand and I want to take a bite of it in a trillion years. I need to (define ‘me’, ‘apple’, and ‘bite’; and) secure maximum resources to allow the apple and me to survive that long in the face of nature, competitors, and entropy. Thus, I instrumentally converge on throwing everything at universal takeover, except for the basic necessities crucial to my goal.
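To make the trillion-year point a bit more concrete, here is a toy back-of-the-envelope sketch (the functional form and numbers are illustrative assumptions of mine, not part of the argument above): suppose the apple-and-me system survives each time step with a probability that rises with the resources spent protecting it. Then the resources needed just to keep the goal achievable grow roughly in proportion to how far away the deadline is.

```python
# Toy sketch, not a claim from the post: assume the "apple and me" system
# survives each time step with probability p(r) = 1 - 0.1 / (1 + r), where r
# is the amount of resources devoted to protecting it (this exact functional
# form is an arbitrary illustrative assumption).

def resources_needed(horizon: int, target: float = 0.9) -> float:
    """Resources r needed so that p(r) ** horizon >= target."""
    per_step = target ** (1 / horizon)          # required per-step survival probability
    return max(0.0, 0.1 / (1 - per_step) - 1)   # invert p(r) = 1 - 0.1 / (1 + r)

for horizon in (1, 100, 10_000, 1_000_000):
    print(f"horizon = {horizon:>9} steps: resources needed ≈ {resources_needed(horizon):,.0f}")
```

Under these assumptions the required resources scale roughly linearly with the horizon: a goal due in one step needs essentially nothing, while a goal a million steps out needs about a million times more, which is the sense in which very long-term goals push toward grabbing everything available.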

Some of the cruxes that high P(Doom) rests on are that (sufficiently) General Intelligences will (1) have goals, (2) which will be long-term, (3) and thus will instrumentally converge on wanting resources, (4) which are easiest to get with humans (and other AIs they might build) out of the way, (5) so when they can get away with it, they’ll do away with humans.

So if we make General Intelligences with short-term goals, perhaps we don’t need to fear an AI apocalypse.

Assuming the first crux, why the second? That is, assuming GIs will have goals, what are the best reasons to think that such intelligences will by default have long-term goals (as opposed to short-term goals like “quickly give a good answer to the question I was just asked”)?