Humans know that it takes a journey to reach a goal, and that the journey can be a goal in and of itself. For an artificial agent, only the goal matters; one way of reaching it is as good as any other. If you told it to reach Africa but not how, it might as well wait until continental drift carries it there. Would that be stupid?
If you gave it an infinite planning horizon with negligible discounting and simply told it to get to Africa, its first steps would likely be to make sure that both it and Africa continue to exist: setting up a sophisticated defense system, protecting itself and protecting Africa from threats like meteorite strikes. Actually going to Africa would follow as a consequence of these initial steps.
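To make the discounting point concrete, here is a toy calculation (the numbers, time horizons, and success probabilities are illustrative assumptions, not from the original argument). The agent gets a reward of 1.0 for being in Africa at its arrival time, discounted by gamma per step. A slow plan that first secures the agent's and Africa's existence overtakes a fast but riskier direct plan as the discount factor approaches 1:

```python
# Toy illustration with hypothetical numbers: expected discounted utility
# of reaching Africa under two plans. Reward is 1.0 for being in Africa
# at arrival time t, discounted by gamma**t.

def expected_utility(p_success: float, t_arrival: int, gamma: float) -> float:
    """Probability-weighted, discounted value of arriving at t_arrival."""
    return p_success * gamma ** t_arrival

# Plan A: head straight for Africa. Fast, but some chance of never arriving.
# Plan B: first build defense systems for itself and for Africa. Slow, but safer.
for gamma in (0.9, 0.999, 0.99999):
    direct   = expected_utility(p_success=0.90, t_arrival=10,   gamma=gamma)
    defended = expected_utility(p_success=0.99, t_arrival=1000, gamma=gamma)
    winner = "defend first" if defended > direct else "go directly"
    print(f"gamma={gamma}: direct={direct:.3f}, defended={defended:.3f} -> {winner}")
```

With heavy discounting (gamma = 0.9) the direct plan wins easily, but by gamma = 0.99999 the patient plan dominates, even though nothing about defense appears anywhere in the utility function.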
Things like meteorite defense systems are consequences of instrumental goals: you never built them into the utility function, but they emerge anyway.