Cognitive Bias of AI Researchers?

I find it inconvenient that many AI discussions revolve around “agents”, “environments” and “goals”. These are non-mathematical words, and by using this vocabulary we anthropomorphize phenomena of the natural world.

While an “agent” may well be a set of interacting processes that produce emergent phenomena (such as volition, cognition, and action), it is not a fundamental, pragmatic mathematical concept. The truly fundamental, pragmatic mathematical concepts may be:

(1) states: a set of possible world states (each being a set of conditions).
(2) processes: a set of world processes, each progressing towards some of those states.
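
To make these two notions concrete, here is a minimal Python sketch. The names `State` and `Process`, and the choice to model a process as a function on states, are my own illustrative assumptions rather than an established formalism:

```python
from typing import Callable, FrozenSet

# (1) A world state: an immutable set of conditions that currently hold.
State = FrozenSet[str]

# (2) A process: something that moves the world from one state to another,
# modeled here (as an assumption) as a function on states.
Process = Callable[[State], State]

def reaches(state: State, target: State) -> bool:
    """A state 'reaches' a target state when every target condition holds in it."""
    return target <= state

# Example: the target state {"goal_met"} is reached by any state containing it.
print(reaches(frozenset({"goal_met", "other"}), frozenset({"goal_met"})))  # True
```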

If so, how could we avoid this anthropomorphic cognitive bias in our papers and discussions?

Would terms (1) and (2) be a good alternative for our discussions, and for expressing the ideas in most AI research papers? E.g., Bob is a process, Alice is a process, … and collectively they progress towards some desired convergent state, defined by process addition.
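
As one possible reading of that example, here is a hedged sketch in which `bob` and `alice` are processes in the sense of (2), and “process addition” is assumed to mean sequential composition; the post does not define the operation, so this composition rule and all condition names are illustrative assumptions:

```python
from functools import reduce
from typing import Callable, FrozenSet

State = FrozenSet[str]
Process = Callable[[State], State]

# Two illustrative processes, each establishing a condition in the world state.
def bob(state: State) -> State:
    return state | {"data_collected"}

def alice(state: State) -> State:
    # Alice's contribution depends on Bob's condition already holding.
    return state | {"model_trained"} if "data_collected" in state else state

def add(p: Process, q: Process) -> Process:
    """'Process addition', assumed here to mean: apply p, then q."""
    return lambda s: q(p(s))

# The collective process converges on a state where both conditions hold.
combined = reduce(add, [bob, alice])
print(sorted(combined(frozenset())))  # ['data_collected', 'model_trained']
```

Under this reading, the “desired convergent state” is simply a fixed point of the combined process; a different notion of addition (e.g., a parallel merge of the states each process produces) would yield a different convergent state.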

What fundamental concepts would you consider a better alternative for talking formally about the AI domain?