Agency is bugs and uncertainty

(Epistemic status: often discussed in bits and pieces; I haven't seen it summarized in one place anywhere.)

Do you feel that your computer sometimes has a mind of its own? "I have no idea why it is doing that!" Do you feel that, the more you understand and can predict someone's actions, the less intelligent and more "mechanical" they appear?

My guess is that, in many cases, agency (as in, the capacity to act and make choices) is a manifestation of the observer's inability to explain and predict the agent's actions. To Omega in Newcomb's problem, humans are just automatons without a hint of agency. To a game player, some NPCs appear stupid and others smart, and the more you play and the more you can predict the NPCs, the less agenty they appear to you.

Note that randomness is not the same as uncertainty: if you can predict that someone or something behaves randomly, that is still a prediction. What I mean is closer to Knightian uncertainty, where one fails to make any useful prediction at all. Something like a tornado may appear to intentionally go after you if you fail to predict where it is going and have trouble escaping.
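To make that distinction concrete, here is a minimal Python sketch (my own illustration, not from any particular source): an agent that behaves randomly by a known rule is still predictable in distribution, and we can check that prediction by calibration, whereas under Knightian uncertainty there is no distribution to check against in the first place.

```python
import random

# Hedged sketch: an agent whose randomness we can model is still predictable
# *in distribution*, even though no single move is predictable.
def coin_flip_agent() -> str:
    """Hypothetical agent: moves 'L' or 'R' with equal probability."""
    return random.choice("LR")

trials = 10_000
observed = sum(coin_flip_agent() == "L" for _ in range(trials)) / trials
print(f"predicted P(L) = 0.500, observed frequency = {observed:.3f}")

# Under Knightian uncertainty we could not even write down P(L): there is no
# model to calibrate against, and that modeling failure (not randomness per
# se) is what reads as "agency" to the observer.
```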

If you are a user of a computer program and it does not behave as you expect, you often get a feeling of a hostile intelligence opposing you. This occasionally results in aggressive behavior toward the program, usually verbal violence, though occasionally physical, the way we would confront an actual enemy. On the other hand, if you are the programmer who wrote the code in question, you think of the misbehavior as bugs, not intentional hostility, and treat the code accordingly, by debugging or documenting. Mostly. Sometimes I personalize especially nasty bugs.

I was told by a nurse that this is also how they are taught to treat difficult patients: you don't get upset at someone's misbehavior and instead treat them not as an agent, but more like an algorithm in need of debugging. Parents of young children are also advised to take this approach.

This also seems to apply to self-analysis, though to a lesser degree. If you know yourself well and can predict what you would do in a specific situation, you may feel that your response is mechanistic or automatic, not agenty or intelligent. Or maybe not; I am not sure. I think that if I had the capacity for full introspection, not just a surface-level understanding of my thoughts and actions, I would ascribe much less agency to myself, probably because it would cease to be a useful concept. I wonder whether this generalizes to a superintelligence capable of perfect or near-perfect self-reflection.

This leads us to the issues of feelings, deliberate choices, free will, and the ability to consent and take responsibility. These seem to be useful, if illusory, concepts when you live among your intellectual peers and want to be treated as having at least as much agency as you ascribe to them. But that is a topic for a different post.