Privacy

Follow-up to: Blackmail

[Note on Compass Rose response: This is not a response to the recent Compass Rose response; it was written before that, but with my post on Hacker News I need to get this out now. It has been edited in light of what was said. His first section is a new counter-argument against a particular point that I made – it is interesting, and I have a response, but it is beyond scope here. It does not fall into either main category, because it is addressing a particular argument of mine rather than being a general argument for blackmail. The second counter-argument is a form of #1 below, combined with #2, #3 and #4 (they do tend to go together), so it is addressed somewhat below, especially the difference between ‘information tends to be good’ and ‘information chosen, engineered and shared so as to be maximally harmful tends to be bad.’ My model and Ben’s of practical results also greatly differ. We intend to hash all this out in detail in conversations, and I hope to have a write-up at some point. Anyway, on to the post at hand.]

There are two main categories of objection to my explicit thesis that blackmail should remain illegal.

Today we will not address what I consider the more challenging category: claims that while blackmail is bad, making it illegal does not improve matters, mainly because we can’t or won’t enforce laws, so it is unclear what the point is, or because the costs of enforcement exceed the benefits.

The category I address here claims blackmail is good. We want more.

Key arguments in this category:

  1. Information is good.*

  2. Blackmail reveals bad behavior.

  3. Blackmail provides incentive to uncover bad behavior.

  4. Blackmail provides a disincentive to bad behavior.

  5. Only bad, rich or elite people are vulnerable to blackmail.

  6. We should strongly enforce all norms on everyone, without context dependence not explicitly written into the norm, and fix or discard any norms we don’t want to enforce in this way.

A key assumption is that blackmail mostly targets existing true bad behavior. I do not think this is true. Not for existing, not for true, and not for bad. For details, see the previous post.

Such arguments also centrally argue against privacy. Blackmail advocates often claim privacy is unnecessary or even toxic.

It’s one thing to give up on privacy in practice, for yourself, in the age of Facebook. I get that. It’s another to argue that privacy is bad. That it is bad to not reveal all the information you know. Including about yourself.

This radical universal transparency position, perhaps even assumption, has come up quite a lot recently. Those advocating it act as if those opposed carry the burden of proof.

No. Privacy is good.

A reasonable life, a good life, requires privacy.

I

We need a realm shielded from signaling and judgment. A place where what we do does not change what everyone thinks about us, or get us rewarded and punished. Where others don’t judge what we do based on the assumption that we are choosing what we do knowing that others will judge us based on what we do. Where we are free from others’ Bayesian updates and those of computers, from what is correlated with what, with how things look. A place to play. A place to experiment. To unwind. To celebrate. To learn. To vent. To be afraid. To mourn. To worry. To be yourself. To be real.

We need people there with us who won’t judge us. Who won’t use information against us.

We need such trust not to risk our ruin. We need to minimize how much we wonder whether someone’s goal is to get information to use against us, or what price would tempt them to do that.

Friends. We desperately need real friends.

II

Norms are not laws.

Life is full of trade-offs and necessary unpleasant actions that violate norms. This is not a fixable bug. Context is important for both enforcement and intelligent or useful action.

Even if we could fully enforce norms in principle, different groups have different such norms and each group’s/person’s norms are self-contradictory. Hard decisions mean violating norms and are common in the best of times.

A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms. It is unclear how one would avoid a total loss of freedom, or a total loss of reasonable action, productivity and survival, in such a context. Police states and cults and thought police and similar ideas have been tried and have definitely not improved this outlook.

What we do for fun. What we do to make money. What we do to stay sane. What we do for our friends and our families. What maintains order and civilization. What must be done.

Necessary actions are often the very things others wouldn’t like, or couldn’t handle… if revealed in full, with context simplified to what gut reactions can handle.

Or worse, with context chosen to have the maximally negative gut reactions.

There are also known dilemmas where any action taken would be a norm violation of a sacred value. And lots of values that claim to be sacred, because every value wants to be sacred, but which we know we must treat as not sacred when making real decisions with real consequences.

Or in many contexts, justifying our actions would require revealing massive amounts of private information that would then cause further harm (and which people very much do not have the time to properly absorb and consider). Meanwhile, you’re talking about the bad-sounding thing, which digs your hole deeper.

We all must do these necessary things. These often violate both norms and formal laws. Explaining them often requires sharing other things we dare not share.

I wish everyone a past and future Happy Petrov Day.

Part of the job of making sausage is to allow others not to see it. We still get reliably disgusted when we see it.

We constantly must claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s never true. Ever.

In these, and in many other ways, we live in an unusually hypocritical time. A time when people need to be far more afraid both of failing to be hypocritical, and of having their hypocrisy revealed.

We are a nation of men, not of laws.

But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.

III

Life requires privacy so we need not reveal the exact extent of our resources.

If others know exactly what resources we have, they can and will take all of them. The tax man who knows what you can pay, what you would pay, already knows what you will pay. For government taxes, and for other types of taxes.

This is not only about payments in money. It is also about time, and emotion, and creativity, and everything else.

Many things in life claim to be sacred. Each claims all known available resources. Each claims we are blameworthy for any resources we hold back. If we hold nothing back, we have nothing.

That which is fully observed cannot be one’s slack. Once all constraints are known, they bind.

Slack requires privacy. Life requires slack.

This includes our decision-making process.

If it is known how we respond to any given action, others find best responses. They will respond to incentives. They exploit exactly the amount we won’t retaliate against. They feel safe.

We seethe and despair. We have no choices. No agency. No slack.

It is a key protection that one might fight back, perhaps massively out of proportion, if others went after us. To any extent.

It is a key protection that one might do something good, if others helped you. Rather than others knowing exactly what things will cause you to do good things, and which will not.
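The game-theoretic logic of the last few paragraphs can be put into a toy model. This is only a sketch under assumed numbers; the names, the uniform threshold, and the penalty value are all hypothetical illustrations, not anything from the post:

```python
# Toy model of the exploitation argument above. An exploiter chooses how
# much to take; the defender retaliates (costing the exploiter PENALTY)
# only when the take exceeds a threshold t. All numbers are hypothetical.

PENALTY = 2.0  # harm that retaliation inflicts on the exploiter

def payoff(take, threshold):
    """Exploiter keeps `take`, and is punished only if over the threshold."""
    return take - (PENALTY if take > threshold else 0.0)

def best_take_known(threshold):
    """Threshold public: exploit exactly the amount not retaliated against."""
    return threshold

def best_take_unknown(grid=1000):
    """Threshold private, assumed uniform on [0, 1]: P(retaliation) = take,
    so expected payoff is take - PENALTY * take. Search a grid numerically."""
    candidates = [i / grid for i in range(grid + 1)]
    return max(candidates, key=lambda take: take - PENALTY * take)

# With the defender's response fully known, the exploiter safely takes
# everything below the line and feels safe doing it:
assert best_take_known(0.5) == 0.5
assert payoff(0.5, 0.5) == 0.5
# With the threshold ambiguous and retaliation harsh (PENALTY > 1),
# exploitation is not worth the risk at any level:
assert best_take_unknown() == 0.0
```

Under these assumed numbers, a fully known response function is exploited right up to its limit, while strategic ambiguity about where retaliation begins deters exploitation entirely.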

It is central that one react when others are gaming the system.

Sometimes that system is you.

World peace, and doing anything at all that interacts with others, depends upon both strategic confidence in some places, and strategic ambiguity in others. We need to choose carefully where to use which.

Having all your actions fully predictable and all your information known isn’t Playing in Hard Mode. That’s Impossible Mode.

I now give specific responses to the six claims above. This mostly summarizes the previous post.

  1. Information, by default, is probably good. But this is a tendency, not a law of physics. As discussed last time, information engineered to be locally harmful probably is net harmful. Keep this distinct from incentive effects on bad behavior, which is argument number 4.

  2. Most ‘bad’ behavior will be a justification for scapegoating, involving levels of bad behavior that are common. Since such bad behavior is rarely made common knowledge, and allowing it to become common knowledge is often considered far worse behavior than the original action, making it common knowledge forces an outsized reaction and punishment. What people are punishing is that you are the type of person who lets this type of information become common knowledge about you. Thus you are not a good ally. In a world like ours, where all are anticipating future reactions by others anticipating future reactions, this can be devastating.

  3. Blackmail does provide incentive to investigate to find bad behavior. But if found, it also provides incentive to make sure it is never discovered. And what is extracted from the target is often further bad behavior, largely because…

  4. Blackmail also provides an incentive to engineer or provoke bad behavior, and to maximize the damage that would result from revelation of that behavior. The incentives promoting more bad behavior likely are stronger than the ones discouraging it. I argue in the last piece that it is common even now for people to engineer blackmail material against others and often also against themselves, to allow it to be used as collateral and leverage. That a large part of job interviews is proving that you are vulnerable in these ways. That much bonding is about creating mutual blackmail material. And so on. This seems quite bad.

  5. If any money one has can be extracted, then one will permanently be broke. This is a lot of my model of poverty traps – there are enough claiming-to-be-sacred things demanding resources that any resources get extracted, so no one tries to acquire resources or hold them for long. Consider what happens if people in such situations are allowed to borrow money. Even if you are (for any reason) sufficiently broke that you cannot pay money, you have much that you could be forced to say or do. Often this involves deep compromises of sacred values, of ethics and morals and truth and loyalty and friendship. It often involves being an ally of those you despise, and reinforcing that which is making your life a living hell, to get the pain to let up a little. Privacy, and the freedom from blackmail, are the only ways out.

  6. A full exploration is beyond scope, but section two above is a sketch.

* – I want to be very clear that yes, information in general is good. But that is a far cry from the radical claim that all and any information is good and sharing more of it is everywhere and always good.