Player vs. Character: A Two-Level Model of Ethics


Epistemic Status: Confident

This idea is actually due to my husband, Andrew Rettek, but since he doesn’t blog, and I want to be able to refer to it later, I thought I’d write it up here.

In many games, such as Magic: The Gathering, Hearthstone, or Dungeons and Dragons, there’s a two-phase process. First, the player constructs a deck or character from a very large sample space of possibilities. This is a particular combination of strengths, weaknesses, and capabilities for action, which the player thinks can be successful against other decks/characters or at winning in the game universe. The choice of deck or character often determines the strategies that deck or character can use in the second phase, which is actual gameplay. In gameplay, the character (or deck) can only use the affordances that it’s been previously set up with. This means that there are two separate places where a player needs to get things right: first, in designing a strong character/deck, and second, in executing the optimal strategies for that character/deck during gameplay.
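
Schematically, the two levels might look something like this (a toy Python sketch; the Character class, the affordance names, and the build-then-play split are all invented for illustration, not taken from any real game):

```python
# Toy model of the two-phase structure: build first, then play.
from dataclasses import dataclass, field


@dataclass
class Character:
    """Phase 1 output: a fixed bundle of affordances."""
    name: str
    affordances: set = field(default_factory=set)

    def act(self, action: str) -> str:
        """Phase 2: gameplay can only use affordances chosen in phase 1."""
        if action not in self.affordances:
            return f"{self.name} cannot {action}: not on the character sheet"
        return f"{self.name} performs: {action}"


# Construction happens once, up front, from a huge space of possibilities...
mage = Character("mage", affordances={"cast fireball", "teleport"})

# ...and constrains everything available during play.
print(mage.act("cast fireball"))     # available: chosen at build time
print(mage.act("swing greatsword"))  # unavailable: never built in
```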

(This is in contrast to games like chess or go, which are single-level; the capacities of black and white are set by the rules of the game, and the only problem is how to execute the optimal strategy. Obviously, even single-level games can already be complex!)

The idea is that human behavior works very much like a two-level game.

The “player” is the whole mind, choosing subconscious strategies: the “elephant”, not the “rider.” The player is heavily influenced by evolutionary pressure; it is built to direct behavior in ways that increase inclusive fitness. The player directs what we perceive, do, think, and feel.

The player creates what we experience as “personality”, fairly early in life; it notices what strategies and skills work for us and invests in those at the expense of others. It builds our “character sheet”, so to speak.

Note that even things that seem like “innate” talents, like the savant skills or hyperacute senses sometimes observed in autistic people, can be observed to be tightly linked to feedback loops in early childhood. In other words, savants practice the thing they like and are good at, and gain “superhuman” skill at it. They “practice” along a faster and more hyperspecialized path than what we think of as a neurotypical “practicing hard,” but it’s still a learning process. Savant skills are more rigidly fixed and seemingly “automatic” than non-savant skills, but they still change over time; e.g. Stephen Wiltshire, a savant artist who manifested an ability to draw hyper-accurate perspective drawings in early childhood, has changed and adapted his art style as he grew up, and has even acquired new savant talents in music. If even savant talents are subject to learning and incentives/rewards, certainly ordinary strengths, weaknesses, and personality types are likely to be “strategic” or “evolved” in this sense.

The player determines what we find rewarding or unrewarding. The player determines what we notice and what we overlook; things come to our attention if it suits the player’s strategy, and not otherwise. The player gives us emotions when it’s strategic to do so. The player sets up our subconscious evaluations of what is good for us and bad for us, which we experience as “liking” or “disliking.”

The character is what executing the player’s strategies feels like from the inside. If the player has decided that a task is unimportant, the character will experience “forgetting” to do it. If the player has decided that alliance with someone will be in our interests, the character will experience “liking” that person. Sometimes the player will notice and seize opportunities in a very strategic way that feels to the character like “being lucky” or “being in the right place at the right time.”

This is where confusion often sets in. People will often protest “but I did care about that thing, I just forgot” or “but I’m not that Machiavellian, I’m just doing what comes naturally.” This is true, because when we talk about ourselves and our experiences, we’re speaking “in character”, as our character. The strategy is not going on at a conscious level. In fact, I don’t believe we (characters) have direct access to the player; we can only infer what it’s doing, based on what patterns of behavior (or thought or emotion or perception) we observe in ourselves and others.

Evolutionary psychology refers to the player’s strategy, not the character’s. (It’s unclear which animals even have characters in the way we do; some animals’ behavior may all be “subconscious”.) So when someone speaking in an evolutionary-psychology mode says that babies are manipulating their parents into not having more children, for instance, that obviously doesn’t mean that my baby is a cynically manipulative evil genius. To him, it probably just feels like “I want to nurse at night. I miss Mama.” It’s perfectly innocent. But of course, this has the effect that I can’t have more children until I wean him, and that’s in his interest (or, at least, it was in the ancestral environment, when food was scarcer).

Szaszian or evolutionary analysis of mental illness is absurd if you think of it as applying to the character: of course nobody wakes up in the morning and decides to have a mental illness. It’s not “strategic” in that sense. (If it were, we wouldn’t call it mental illness; we’d call it feigning.) But at the player level, it can be fruitful to ask “what strategy could this behavior be serving the person?” or “what experiences could have made this behavior adaptive at one point in time?” or “what incentives are shaping this behavior?” (And, of course, externally visible “behavior” isn’t the only thing the player produces: thoughts, feelings, and perceptions are also produced by the brain.)

It may make more sense to frame it as “what strategy is your brain executing?” rather than “what strategy are you executing?”, since people generally identify as their characters, not their players.

Now, let’s talk morality.

Our intuitions about praise and blame are driven by moral sentiments. We have emotional responses of sympathy and antipathy towards behavior of which we approve and disapprove. These are driven by the player, which creates incentives and strategic behavior patterns for our characters to play out in everyday life. The character engages in coalition-building with other characters, forms and breaks alliances with other characters, honors and shames characters according to their behavior, signals to other characters, etc.

When we, speaking as our characters, say “that person is good” or “that person is bad”, we are making one move in an overall strategy that our players have created. That strategy is the determination of when, in general, we will call things or people “good” or “bad”.

This is precisely what Nietzsche meant by “beyond good and evil.” Our notions of “good” and “evil” are character-level notions, encoded by our players.

Imagine that somewhere in our brains, the player has drawn two cartoons, marked “hero” and “villain”, that we consult whenever we want to check whether to call another person “good” or “evil.” (That’s an oversimplification, of course; it’s just for illustrative purposes.) Now, is the choice of cartoons itself good or evil? Well, the character checks… “Ok, is it more like the hero cartoon or the villain cartoon?” The answer is “ummm… type error.”

The player is not like a hero or a villain. It is not like a person at all, in the usual (character-level) sense. Characters have feelings! Players don’t have feelings; they are beings of pure strategy that create feelings. Characters can have virtues or vices! Players don’t; they create virtues or vices, strategically, when they build the “character sheet” of a character’s skills and motivations. Characters can be evaluated according to moral standards; players set those moral standards. Players, compared to us characters, are hyperintelligent Lovecraftian creatures that we cannot relate to socially. They are beyond good and evil.
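
To make the type error literal, here’s a tiny illustrative sketch (the classes and the is_heroic predicate are my inventions, purely to dramatize the metaphor):

```python
# Character-level predicates are defined on characters, not players.
# Applying one to a player doesn't return False; it's a category mistake.

class Character:
    def __init__(self, virtues):
        self.virtues = virtues  # characters can have virtues and vices


class Player:
    """Pure strategy; no virtues attribute exists at this level."""

    def build_character(self):
        # The player writes the character sheet, including its virtues.
        return Character(virtues=["courage"])


def is_heroic(c: Character) -> bool:
    return "courage" in c.virtues


player = Player()
print(is_heroic(player.build_character()))  # True: characters can be judged
# is_heroic(player)  # AttributeError at runtime; a type error under mypy
```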

However! There is another, very different sense in which players can be evaluated as “moral agents”, even though our moral sentiments don’t apply to them.

We can observe what various game-theoretic strategies do and how they perform. Some, like “tit for tat”, perform well on the whole. Tit-for-tat-playing agents cooperate with each other. They can survive pretty well even if there are different kinds of agents in the population; and a population composed entirely of tit-for-tat-ers is stable and well-off.

While we can’t call cellular automata performing game strategies “good guys” or “bad guys” in a sentimental or socially-judgmental way (they’re not people), we can totally make objective claims about which strategies dominate others, or how strategies interact with one another. This is an empirical and theoretical field of science.
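
As a concrete illustration, here is a minimal iterated prisoner’s dilemma in Python. The payoff values and round count are the conventional textbook assumptions, not anything specific to this post:

```python
# Minimal iterated prisoner's dilemma: tit-for-tat vs. always-defect.

COOPERATE, DEFECT = "C", "D"

# Standard payoffs: (row player, column player) per round.
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT): (0, 5),
    (DEFECT, COOPERATE): (5, 0),
    (DEFECT, DEFECT): (1, 1),
}


def tit_for_tat(opponent_history):
    # Cooperate first; afterwards, mirror the opponent's last move.
    return COOPERATE if not opponent_history else opponent_history[-1]


def always_defect(opponent_history):
    return DEFECT


def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # each side's record of the opponent's moves
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b


print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))    # (99, 104): exploited once, then punishes
print(play(always_defect, always_defect))  # (100, 100): mutual defection is poor
```

There is no sentiment anywhere in that program, but the comparative performance of the strategies is an objective output.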

And there is a kind of “”morality”” which I almost hesitate to call morality, because it isn’t very much like social-sentiment-morality at all, but which is very important, and which says simply: the strategies that win in the long run are good, the ones that lose in the long run are bad. Not “like the hero cartoon” or “like the villain cartoon”, but simply “win” and “lose.”

At this level you can say “look, objectively, people who set up their tables of values in this way, calling X good and Y evil, are gonna die.” Or “this strategy is conducting a campaign of unsustainable exploitation, which will work well in the short run, but will flame out when it runs out of resources, and so it’s gonna die.” Or “this strategy is going to lose to that strategy.” Or “this strategy is fine in the best-case scenario, but it’s not robust to noise, and if there are any negative shocks to the system, it’s going to result in everybody dying.”

“But what if a losing strategy is good?” Well, if you are in that value system, of course you’ll say it’s good. Also, you will lose.
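
One way to make “win” and “lose” precise is population dynamics: strategies that earn above-average payoff proliferate, and the rest shrink. Here is a sketch reusing the 100-round match scores from the prisoner’s-dilemma example above (the replicator update rule is standard; the initial population mix is an arbitrary assumption):

```python
# Replicator dynamics over the match payoffs computed above.

import numpy as np

# payoff[i][j] = score strategy i earns in a match against strategy j,
# for strategies (tit-for-tat, always-defect).
payoff = np.array([
    [300.0,  99.0],   # tit-for-tat
    [104.0, 100.0],   # always-defect
])

shares = np.array([0.5, 0.5])  # start from an even population mix

for generation in range(100):
    fitness = payoff @ shares            # expected payoff of each strategy
    average = shares @ fitness           # population-average payoff
    shares = shares * fitness / average  # above-average strategies grow

print(shares.round(3))  # ~[1. 0.]: tit-for-tat takes over; always-defect dies out
```

No cartoon of a hero or villain appears anywhere; the model only reports which table of values persists.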

Mother Teresa is a saint, in the literal sense: she was canonized by the Roman Catholic Church. Also, she provided poor medical care for the sick and destitute: unsterilized needles, no pain relief, conditions in which tuberculosis could and did spread. Was she a good person? It depends on your value system, and, obviously, according to some value systems she was. But it seems that a population that places Mother Teresa as its ideal (relative to, say, Florence Nightingale) will be a population with more deaths from illness, not fewer, and more pain, not less. A strategy that says “showing care for the dying is better than promoting health” will lose to one that actually rewards actions that promote health. That’s the “player-level” analysis of the situation.

Some game-theoretic strategies (what Nietzsche would call “tables of values”) are more survival-promoting than others. That’s the sense in which you can get from “is” to “ought.” The Golden Rule (Hillel’s, Jesus’s, Confucius’s, etc.) is a “law” of game theory, in the sense that its effectiveness as a strategy is a universal, abstract fact, one which even a Lovecraftian alien intelligence would recognize; that is why it keeps being rediscovered around the world.

But you can’t adjudicate between character strategies just by being a character playing your strategy. For instance, a Democrat usually can’t convert a Republican just by being a Democrat at him. To change a player’s strategy is more like “getting the bodymind to change its fundamental assessments of what is in its best interests.” Which can happen, and can happen deliberately and with the guidance of the intellect! But not without some… what you might call wiggling things around.

The way I think the intellect plays into “metaprogramming” the player is indirect: you can infer what the player is doing, do some formal analysis of how that will play out, comprehend (again at the “merely” intellectual level) whether there’s an error or something that’s no longer relevant/adaptive, plug that new understanding into some change that the intellect can effect (maybe “let’s try this experiment”), and maybe somewhere down the chain of causality the player’s strategy changes. (Exposure therapy is a simple example, probably much simpler than most: add some experiences of the thing not being dangerous, and the player determines it really isn’t dangerous and stops generating fear emotions.)
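
A deliberately crude sketch of that exposure loop (the moving-average update rule, the learning rate, and the fear threshold are my own illustrative assumptions; real learning is surely messier):

```python
# The "player" tracks an estimated danger level and emits fear
# only while the estimate stays above a threshold.

def run_exposure(trials, learning_rate=0.2, fear_threshold=0.3):
    danger_estimate = 1.0  # starts out convinced the thing is dangerous
    for trial in range(1, trials + 1):
        observed_harm = 0.0  # each exposure: nothing bad actually happens
        danger_estimate += learning_rate * (observed_harm - danger_estimate)
        feels_fear = danger_estimate > fear_threshold
        print(f"trial {trial}: danger={danger_estimate:.2f}, fear={feels_fear}")


run_exposure(trials=8)
# After about six safe exposures the estimate drops below threshold,
# and the character simply stops feeling afraid.
```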

You don’t get changes in player strategies just by executing social praise/blame algorithms, though; those algorithms are for interacting with other characters. Metaprogramming is… I want to say “cold” or “nonjudgmental” or “asocial”, but none of those words is quite right, because they describe character traits or personalities or mental states, and it’s not a character-level thing at all. It’s a thing Lovecraftian intelligences can do to themselves, in their peculiar tentacled way.