When is it Wrong to Click on a Cow?

[Speculative philosophy. Originally on Grand, Unified, Crazy.]

Three Stories

Imagine, for a moment, three young adults recently embarked on the same promising career path. The first comes home from work each day, and spends their evenings practising and playing a musical instrument. The second comes home from work each day, and spends their evenings practising and playing a video game. The third comes home from work each day, and spends their evenings hooked up to a machine which directly stimulates the pleasure and reward centres of their brain.

How do these people make you feel?

For some people with more libertarian, utilitarian, or hedonistic perspectives, all three people are equally positive. They harm no-one, and are spending their time on activities they enjoy and freely chose. We can ask nothing more of them.

And yet this perspective does not line up with my intuitions. For me, and I suspect for many people, the musician's choice of hobby is laudable, the gamer's is relatively neutral, and the "stimmer"'s (the person with the brain-stimulating machine) is distinctly repugnant in a way that feels vaguely ethics-related. It may be difficult to actually draw that repugnance out in clear moral language – after all, no-one is being harmed – but still… they're not the kind of person you'd want your children to marry.

The Good and The Bad

Untangling the "why" of these intuitions is quite an interesting problem. Technically all three hobbies rely on hijacking the reward centres of the brain, whose original evolutionary advantages were more to do with food, sex, and other survival-related tasks. There's a fairly short path from arguing that the stimmer's behaviour is repugnant to arguing that all three cases are repugnant; after all, none of them result in food or anything truly "productive". But this tack also seems to go a bit against our intuitions.

Fortunately, the world has a lot of different video games, and we can use that range to draw out some more concrete differences. At the low end are games like Cow Clicker and Cookie Clicker, which are so basic as to be little more than indirect versions of the reward-centre-stimulating machine. More complex games seem to intuitively fare a little better, as do games with a non-trivial social element. Games that directly attempt to train us in some way also seem to do a little better, whether they actually work or not.

Generalizing slightly, it seems like the things we care about to make video games more "positive" are roughly: transferable skills, personal growth, and social contact. But this model doesn't seem to fit so well when applied to learning an instrument. You could argue that it includes transferable skills, but the obvious candidates only transfer to other instruments and forms of musicianship, not to anything strictly "practical". Similarly, social contact is a positive, but it's not a required component of learning an instrument. Playing in a group seems distinctly better than learning it by yourself, but learning it on your own still seems like a net positive. Our final option of "personal growth" now seems very wishy-washy. Yes, learning an instrument seems to be a clear case of personal growth, but… what does that mean exactly? How is it useful, if it doesn't include transferable skills or social contact?

There are a few possible explanations that I'm not going to explore fully in this post, since it would take us a bit far afield from the point I originally wanted to address. For one, perhaps music is seen as more of a shared or public good, one that naturally increases social cohesion. It seems plausible that maybe our intuitions just can't account for somebody learning music entirely in private, with no social benefits.

Another approach would be to lean on Jonathan Haidt's The Righteous Mind and its Moral Foundations Theory. Certainly none of the three people are causing harm with their actions, but perhaps they are triggering one of our weirder loyalty or sanctity intuitions?

Thirdly, perhaps the issue with the third hobby is less "it's not useful" and more a concern that it's actively dangerous. We know from experiments on rats (and a few unethical ones on humans) that such machines can lead to addictive behaviour and a very dangerous disregard for food and other critical needs. Perhaps as video games become more indirect, they become less addictive and simply less dangerous.

Moral Obligations

Really though, these questions are being unpacked in order to answer the more interesting one in this essay's title: when is it wrong to click on a cow? Or slightly less metaphorically: what moral obligations do we have around how we spend our leisure time? Should I feel bad about reading a book if it doesn't teach me anything? Should I feel bad about going out to see a show if it's not some deep philosophical exploration of the human spirit? What about the widely-shat-upon genre of reality television?

Even more disturbingly, what are the implications for just hanging out with your friends? Surely that's still a good thing?

If I generalize my intuitions well past my ability to back them up with reason, we have some weak moral obligation to spend our time in a way that benefits our group: either through direct development of publicly beneficial skills like music, or through more general self-improvement in one form or another, or through socializing and social play and the resulting group bonding. Anything that we do entirely without benefit to others is onanistic and probably wrong.

The final question is then: what if that isn't what I find enjoyable? How much room is there in life for reading trashy novels and watching the Kardashians? The moral absolutist in me suggests that there is none; that we must do our best to optimize what little time we have as effectively as possible. But that's a topic for another post.