I suspect that “has goals” is ultimately a model, rather than a fact. To the extent that an agent’s behavior maximizes a particular function, that agent can be usefully modeled as an optimizer. To the extent that an agent’s behavior exhibits signs of poor strategy, such as vulnerability to Dutch books, that agent may be better modeled as an algorithm-executor.
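As a toy illustration of the Dutch-book idea (a money pump), here is a sketch of an agent whose preferences are cyclic. Every name and number below is hypothetical; the point is only that each trade looks like an improvement by the agent’s own lights, yet the cycle steadily extracts money:

```python
# Money-pump sketch: an agent with cyclic preferences A < B < C < A
# will pay a small fee for each "upgrade" trade, and can be walked
# around the cycle indefinitely, losing money every lap.

FEE = 1

# Hypothetical cyclic preference: each key is strictly preferred to its value.
prefers_over = {"B": "A", "C": "B", "A": "C"}

def agent_accepts(offered, held):
    """The agent trades whenever it strictly prefers the offered item."""
    return prefers_over.get(offered) == held

held, wealth = "A", 100
for offered in ["B", "C", "A", "B", "C", "A"]:  # two laps around the cycle
    if agent_accepts(offered, held):
        held, wealth = offered, wealth - FEE

print(held, wealth)  # ends holding "A" again, 6 units poorer
```

An agent modeled as an optimizer of some fixed function would refuse this sequence; an algorithm-executor following local preference comparisons walks straight into it.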
This suggests that “agentiness” is strongly tied to whether we are smart enough to win against the agent in question.
This principle is related to (a component of) what is meant by being ‘objectified’: if a person is aware that another person can model them as an algorithm-executor, they may consider themselves objectified.