Agree with everything, including the crucial conclusion that thinking and writing about utility maximisation is counterproductive.
Just one minor thing I disagree with in this post: while simulators as a mathematical abstraction are not agents, the physical systems that act as simulators in our world, e.g. LLMs, are agents.
An attempt to answer the question in the title of this post, although it may be a rhetorical one:
This could be a sort of epistemic and rhetorical inertia, driven in particular by the infamous example of the paperclip maximiser. For a similar reason, many people are still discussing, and remain mesmerised by, the “Chinese Room” argument.
The historic focus of LW has been decision theory and the discipline of rationality, where the EU maximiser is the model agent. This concept was then carried over into AI alignment discussions and applied to model future superhuman AI without careful consideration, or for lack of better models of agency (at least at the time this happened). Then, again, epistemic and didactic inertia: many “foundational” AI x-risk/alignment texts still mention utility maximisation, and people entering the field are still exposed to this concept early and think and write about it, perhaps before finding concepts and questions more relevant to actual reality.