Agency is bugs and uncertainty

(Epistemic status: often discussed in bits and pieces; I haven’t seen it summarized in one place anywhere.)

Do you feel that your computer sometimes has a mind of its own? “I have no idea why it is doing that!” Do you feel that, the more you understand and predict someone’s actions, the less intelligent and more “mechanical” they appear?

My guess is that, in many cases, agency (as in, the capacity to act and make choices) is a manifestation of the observer’s inability to explain and predict the agent’s actions. To Omega in Newcomb’s problem, humans are just automatons without a hint of agency. To a game player, some NPCs appear stupid and others smart, and the more you play and the better you can predict the NPCs, the less agenty they appear to you.

Note that randomness is not the same as uncertainty: if you can predict that someone or something behaves randomly, and with what distribution, that is still a useful prediction. What I mean is more like Knightian uncertainty, where one fails to make a useful prediction at all. Something like a tornado may appear to intentionally go after you if you cannot predict where it is going and have trouble escaping.
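
To make the distinction concrete, here is a minimal Python sketch. The coin processes, the 0.7 bias, and names like `knightian_coin` are all made up for illustration, and the second process only gestures at Knightian uncertainty, since a truly unmodelable process cannot be simulated. The point is just that a known distribution is predictable enough to exploit, while a bias redrawn on every flip leaves a fixed guess no better than chance.

```python
import random

def biased_coin(p=0.7):
    """Random but statistically predictable: heads with known probability p."""
    return "H" if random.random() < p else "T"

def knightian_coin():
    """Stand-in for an unmodelable process: the bias is redrawn on every
    flip, so no fixed model of the coin gives the observer an edge."""
    p = random.random()
    return "H" if random.random() < p else "T"

def hit_rate(process, guess, trials=100_000):
    """How often a fixed best guess matches the process's output."""
    return sum(process() == guess for _ in range(trials)) / trials

# Knowing the distribution is a useful prediction: guessing "H" hits ~70%.
print("biased coin:   ", hit_rate(biased_coin, "H"))
# Here no fixed guess beats chance: ~50%, however well you study the process.
print("knightian coin:", hit_rate(knightian_coin, "H"))
```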

If you are a user of a computer program and it does not behave as you expect it to, you often get the feeling of a hostile intelligence opposing you. This occasionally results in aggressive behavior toward it, usually verbal abuse, though sometimes it gets physical, the way we would confront an actual enemy. On the other hand, if you are the programmer who wrote the code in question, you think of the misbehavior as bugs, not intentional hostility, and treat the code by debugging or documenting it. Mostly. Sometimes I personalize especially nasty bugs.

I was told by a nurse that this is also how they are taught to treat difficult patients: you don’t get upset at the patient’s misbehavior; instead you treat them not as an agent, but more like an algorithm in need of debugging. Parents of young children are also advised to take this approach.

This seems to apply to self-analysis as well, though to a lesser degree. If you know yourself well and can predict what you would do in a specific situation, you may feel that your response is mechanistic or automatic, not agenty or intelligent. Or maybe not. I am not sure. I think if I had the capacity for full introspection, not just a surface-level understanding of my thoughts and actions, I would ascribe much less agency to myself. Probably because it would cease to be a useful concept. I wonder if this generalizes to a superintelligence capable of perfect or near-perfect self-reflection.

This leads us to the issues of feelings, deliberate choices, free will, and the ability to consent and take responsibility. These seem to be useful, if illusory, concepts for when you live among your intellectual peers and want to be treated as having at least as much agency as you ascribe to them. But that is a topic for a different post.