Unless I misunderstand the idea of the highlighted sentence, I believe the following post is motivated by very much the same themes:
Why modelling multi-objective homeostasis is essential for AI alignment (and how it helps with AI safety as well). Subtleties and Open Challenges.
It is essentially about utility / reward functions in the brain and how naive unbounded maximisation is partially alien to biological / human needs. Many, perhaps almost all, biological needs have target objectives that must stay within an optimal range: both too little and too much must be actively avoided.

If AI training (and the models' default assumptions and mathematics) does not reflect or properly support these considerations, then the resulting model is likely misaligned from the start.
There is still an important place for unbounded objectives, but it seems unboundedness is appropriate primarily for instrumental objectives.
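As a rough illustration of the bounded-versus-unbounded distinction, here is a minimal Python sketch (my own, not from the linked post): a homeostatic reward penalises deviation from a setpoint in either direction, whereas a naive unbounded reward always prefers more. The objective names and setpoint values are purely hypothetical.

```python
def unbounded_reward(resource_level: float) -> float:
    """Naive maximisation: more is always better, with no upper bound."""
    return resource_level


def homeostatic_reward(levels: dict[str, float],
                       setpoints: dict[str, float]) -> float:
    """Multi-objective homeostasis: each objective is rewarded for staying
    near its setpoint, so both deficit and excess are penalised."""
    return -sum((levels[k] - setpoints[k]) ** 2 for k in setpoints)


# Hypothetical agent tracking two biological-style needs.
setpoints = {"glucose": 1.0, "temperature": 37.0}
near_optimal = {"glucose": 1.1, "temperature": 36.8}
excess = {"glucose": 5.0, "temperature": 37.0}

print(homeostatic_reward(near_optimal, setpoints))  # close to 0: within the optimal range
print(homeostatic_reward(excess, setpoints))        # strongly negative: "too much" is also bad
print(unbounded_reward(5.0))                        # keeps growing without limit
```

The point of the contrast is only that the homeostatic formulation has a natural optimum, while the unbounded one rewards runaway accumulation; the quadratic penalty is just one simple choice of bounded objective.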