Emotional valence as a cognition mutator (not a bug, but a feature)

Maybe because of my neurotype, I consider it important to have a good general way of handling things. I noticed a pattern where people would process seemingly similar requests very differently, and I got curious whether there is an actual difference or whether people are being needlessly inconsistent.

  • Can you pass the salt, please?

  • Step aside

  • Can I have your wallet, please?

Two of these are supposed to get a “well, he did ask” kind of reaction, and the remaining one a “no, why would I?”. The difference in suggestibility is great, although they are all supposed to be straightforward requests for X, or “can I have X? yes/no” kinds of questions.

Now, if I give a complicated argument for why you should give me your wallet, you are likely to remain reluctant to comply. This also has the weird property that increasing the sophistication of the argument is unlikely to increase the probability of compliance. It seems the person is not willing to entertain the proposition but is predisposed to reject it from the get-go. This would seem to run contrary to “intellectual openness/readiness”.

However, the behaviour is not really mysterious, and the reasons for it are pretty well founded. Your wallet contains a lot of your money, which matters for a lot of what you do; should something bad happen to it, many things get a lot messier. It’s also an attractive target: it’s plausible that someone might lie to you just to get hold of your wallet, just to have its possession.

When I thought about a straight-up thief trying to get the wallet by asking, I realised that increasing the sophistication of the reason to hand over the wallet is not a good strategy. However, asking for any “innocent” target is likely to encounter a lot more suggestibility, and some of those suggestions might put you in a better position to grab the wallet. That is, a “Hey, what’s over there?” while pointing away, followed by physically grabbing the wallet, is likely to be more effective than a sophisticated argument for handing it over. The strange thing is that if the target perceives your motivation to be the wallet, it’s effectively game over for you as the thief: a lot of the target’s psychological defences know to activate on that cue.

The surprising result I ended up with is that the phenomenon is a legitimate psychological defence, and its presence is actually constructive. Abstracting it to other spheres, it means that when we handle requests touching system-critical things, we place increased weight on understanding the request to a higher degree. It’s not that the agent goes emotional and throws reason out of the window. It’s precisely the opposite: the agent correctly identifies that this request needs to be understood and processed correctly. And it can’t be blanket-rejected, because there is a minority of situations where we actually do want to comply with the request.

There is also a kind of burden-of-proof principle here: if I don’t understand the request, I am going to reject it, even if you use logic that is “more intellectual” than mine. This burden of proof doesn’t apply normally. Normally, “you know what you are doing” can be a reason to give you the benefit of the doubt; I don’t need to know where you are about to walk in order to step out of your way. But in these high-stakes situations I do need to know the details.
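To make the asymmetry concrete, here is a minimal toy sketch in Python. The names (`Stakes`, `should_comply`) are hypothetical ones I’m introducing for illustration, and this is not a claim about how real cognition is implemented; it just encodes the flip in the burden of proof. Low-stakes requests pass on the benefit of the doubt alone, while high-stakes requests comply only on genuine understanding, and notice that argument sophistication never enters the high-stakes branch at all.

```python
from enum import Enum, auto

class Stakes(Enum):
    LOW = auto()   # "pass the salt", "step aside"
    HIGH = auto()  # "hand over your wallet", "let me out of the box"

def should_comply(stakes: Stakes,
                  understood: bool,
                  trust_requester: bool,
                  argument_sophistication: int) -> bool:
    """Toy compliance policy with a stakes-dependent burden of proof.

    Low stakes: the benefit of the doubt is enough; we don't demand
    to understand the requester's reasons.
    High stakes: the burden of proof flips. Only genuine understanding
    of the request unlocks compliance; argument_sophistication is
    deliberately ignored.
    """
    if stakes is Stakes.LOW:
        return trust_requester or understood
    # High stakes: no blanket rejection (understood requests can pass),
    # but no amount of sophistication substitutes for understanding.
    return understood

# "Can you pass the salt?" -> complies on trust alone.
assert should_comply(Stakes.LOW, understood=False,
                     trust_requester=True, argument_sophistication=0)
# "Give me your wallet" plus a very clever argument -> still rejected.
assert not should_comply(Stakes.HIGH, understood=False,
                         trust_requester=True, argument_sophistication=9000)
```

The design point of the sketch is exactly the “weird property” above: in the high-stakes branch, making the argument more sophisticated changes nothing, while actually making the request understood does.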

The ultimate such high-stakes situation would be the AI-boxing scenario. When people first hear about the problem, they liken it more to the “Can you pass the salt, please?” kind of request, or pattern-recognise “friendliness” as a kind of academic curiosity. A method of communication that likened it to being asked for your wallet would make people employ their latent psychological skills on the problem. Here is one mini attempt at it. Situation 1: you have Hitler in a cell and he asks you to let him out. You reply, “No, you are f Hitler.” Situation 2: you have Hitler and an innocent person in a cell, and for some strange reason you don’t know what Hitler looks like. A man behind the bars asks, “Let me go, I am innocent”; you reply, “No, you are Hitler lying that you are the innocent one to get out of jail”; he answers, “What can I do to prove to you I am not Hitler?”. In this kind of situation it’s clear that letting an innocent man walk is desirable, but clearly not worth having to deal with Hitler again, even if we have no reason to think that Hitler is immediately about to commit anything bad.

The AI-boxing problem has sometimes been presented as a unique problem, perhaps requiring unique answers. But the variable suggestibility scale highlights that natural intelligences box each other all the time already. We are capable of trusting each other in some situations, but also capable of forgoing trust when it’s necessary.
