Is this a future AI catastrophe? Or is it a description of current events, a general gradual collapse?
This seems like what is happening now, and has been for a while. Existing ML systems are clearly making Part I problems, which were already quite bad before ML was a thing at all, much worse, to the extent that I don’t see much remaining ability in our civilization to get anything that can’t be measured in a short-term feedback loop. Even in spaces like this, appeals to non-measurable or non-explicit concerns are a near-impossible sell.
Part II problems are not yet coming from ML systems, exactly, but we certainly have algorithms that are effectively optimized and selected for the ability to gain influence; the algorithm gains influence, which causes people to care about it and feed into it, causing it to gain still more. If we make the metaphor less direct, we get the same thing with memetics, culture, life strategies, corporations, media properties and so on: the emphasis on choosing winners, on being ‘on the right side of history’, on supporting those who are good at getting support. OP notes that this happens in non-ML situations explicitly, and there’s no clear dividing line in any case.
So if there is another theory that says this has already happened, what would one do next?
This distinction seems super valuable. What I find most interesting is that I would have labeled what OP calls Rest as Recovery, and what it calls Recovery as Rest...
I will attempt to clarify which of these things I actually believe, as best I can, but do not expect to be able to engage deeper into the thread.
Implication: it’s bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)
>> What I’m primarily thinking about here is that if one is going to be rewarded/punished for what one does and thinks, one chooses what one does and thinks largely based upon that—you get a signaling equilibrium, as Wei Dai notes in his top-level comment. I believe that this is in many situations much worse, and will lead to massive warping of behavior in various ways, even if those rewarding/punishing were attempting to be just (or even if they actually were just, if there wasn’t both common knowledge of this and agreement on what is and isn’t just). The primary concern isn’t whether someone can expect to be on-net punished or rewarded, but how behaviors are changed.
We need people there with us who won’t judge us. Who won’t use information against us.
Implication: “judge” means to use information against someone. Linguistic norms related to the word “judgment” are so thoroughly corrupt that it’s worth ceding to them, linguistically, and using “judge” to mean (usually unjustly!) using information against people.
>> Judge here means to react to information about someone or their actions or thoughts largely by updating one’s view of that person—the point being to not have to worry (as much, at least) about how things make you seem. The second sentence is a second claim: that we also need them not to use the information against us. I did not intend for the second to seem to be part of the first.
A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms.
Implication (in the context of the overall argument): a general reduction in privacy wouldn’t lead to norms changing or being enforced less strongly, it would lead to the same norms being enforced strongly. Whatever or whoever decides which norms to enforce and how to enforce them is reflexive rather than responsive to information. We live in a reflex-based control system.
>> That doesn’t follow at all, and I’m confused why you think that it does. I’m saying that when I try to design a norm system from scratch in order to be compatible with full non-contextual strong enforcement, I don’t see a way to do that. Not that things wouldn’t change—I’m sure they would.
There are also known dilemmas where any action taken would be a norm violation of a sacred value.
Implication: the system of norms is so corrupt that they will regularly put people in situations where they are guaranteed to be blamed, regardless of their actions. They won’t adjust even when this is obvious.
>> The system of norms is messy, which is different from corrupt. Different norms conflict. Yes, the system is corrupt, but that’s not required for this to be a problem. Concrete example, chosen to hopefully be uncontroversial: either turn away the sick child whose treatment is expensive, or risk bankrupting the hospital.
Part of the job of making sausage is to allow others not to see it. We still get reliably disgusted when we see it.
Implication: people expect to lose value by knowing some things. Probably it is because they would expect to be punished were it revealed that they know these things (as in 1984). It is all an act, and it’s better not to know that in concrete detail.
>> Consider the literal example of sausage being made. The central problem is not that people are afraid the sausage makers will strike back at them. The problem is knowing reduces one’s ability to enjoy sausage. Alternatively, it might force one to stop enjoying sausage.
>> Another important dynamic is that we want to enforce a norm that X is bad and should be minimized, but sometimes X is necessary. So we’d rather not be reminded too much of the X that occurs in situations where we know it must, both to avoid weakening the norm against X elsewhere, and because we don’t want to penalize those doing X where it is necessary, as we instinctively would if we learned too much detail.
We constantly must claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s never true. Ever.
Implication: the control system demands optimistic stories regardless of the facts. There is something or someone forcing everyone to call the deer a horse under threat of punishment, to maintain a lie about how good things are, probably to prop up an unjust regime.
>> OK, this one’s just straight up correct if you remove the unjust regime part. Also, I am married with children.
But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.
Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)
>> As I noted above, my model of norms is that even at their best they are messy ways of steering behavior, and generally just norms will in some circumstances push towards incorrect action, in ways the norm system would cause people to instinctively punish. In such cases it is sometimes correct to violate the norm system, even if it is as just a system as one could hope for. And yes, in some of those cases, it would be good to hide that this was done, to avoid weakening norms (including by allowing such cases to go unpunished, thus enabling stronger punishment elsewhere).
If others know exactly what resources we have, they can and will take all of them.
Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn’t 100% [EDIT: see Ben’s comment, the actual rate of extraction is higher than the marginal tax rate])
>> This is not primarily a statement about The Powers That Be or any particular bad guys. I think this is inherent in how people and politics operate, and in what happens when one has many conflicting would-be sacred values. Of course, it is also a statement that when gangsters do come after you, it is important that they not know what you have, and there is always worry about potential gangsters on many levels, whether or not they have won. Often the thing taking all your resources is not a bad guy: expensive medical treatments, in-need family members, and so on.
If it is known how we respond to any given action, others find best responses. They will respond to incentives. They exploit exactly the amount we won’t retaliate against. They feel safe.
Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)
>> Often on the margin more information is helpful. But complete information is highly dangerous. And in my experience, most systems in an interesting equilibrium where good things happen sustain that partly with fuzziness and uncertainty—the idea that obeying the spirit of the rules and working towards the goals and good things gets rewarded, other action gets punished, in uncertain ways. There need to be unknowns in the system. Competitions where every action by other agents is known are one-player games about optimization and exploitation.
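The dynamic above, where an exploiter with full knowledge of your response rule takes exactly the amount you won’t retaliate against, while uncertainty deters them entirely, can be sketched as a toy model. Everything here is hypothetical and mine, not from the original discussion: the payoff numbers, the uniform threshold distribution, and the `best_take` function are illustration only.

```python
def best_take(known_threshold=None, penalty=30):
    """Attacker chooses how much to take from {0..10}.

    The defender retaliates (imposing cost `penalty`) if and only if
    the take exceeds its retaliation threshold.
    """
    if known_threshold is not None:
        # Full information: exploit exactly up to the line, never over it.
        return known_threshold

    # Threshold only known to be uniform on {0..10}: attacker can merely
    # maximize *expected* payoff, risking retaliation with every unit taken.
    def expected(take):
        p_punished = take / 11  # P(threshold < take) under the uniform prior
        return take - penalty * p_punished

    return max(range(11), key=expected)

print(best_take(known_threshold=7))  # transparent defender is exploited for exactly 7
print(best_take())                   # uncertain retaliation: taking anything is -EV, so 0
```

The point is not the specific numbers but the shape: against a fully known rule the attacker’s problem collapses into a one-player optimization, while a sufficiently scary and uncertain response makes any exploitation negative in expectation.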
World peace, and doing anything at all that interacts with others, depends upon both strategic confidence in some places, and strategic ambiguity in others. We need to choose carefully where to use which.
Implication (in context): strategic ambiguity isn’t just necessary for us given our circumstances, it’s necessary in general, even if we lived in a surveillance state. (Huh?)
>> Strategic ambiguity is necessary for the surveillance state so that people can’t safely do everything the state didn’t explicitly forbid or punish. It is necessary for those living in the state because the risk of revolution, the we’re-not-going-to-take-it-anymore moment, helps keep such places relatively livable versus places where there is no such fear. It is important that the state not know exactly what will cause the people to rise up, or it will treat them exactly as badly as won’t trigger that. And of course I was also talking explicitly about things like ‘if you cross that border we will be at war’: there are times when you want to be 100% clear that there will be war (e.g. NATO) and others where you want to be 100% unclear (e.g. Taiwan).
To conclude: if you think the arguments in this post are sound (with the conclusion being that we shouldn’t drastically reduce privacy in general), you also believe the implications I just listed, unless I (or you) misinterpreted something.
>> I hope this cleared things up. And of course, you can disagree with many, most, or even all of my arguments and still not think we should radically reduce privacy. Radical changes don’t default to being a good idea just because someone gives invalid arguments against them!
I replied to this comment on my blog (https://thezvi.wordpress.com/2019/03/15/privacy/#comment-3827)
They would not change it back.
Yes. Long post is long and I didn’t want to throw out arguments about particular reveals to show this—in particular, we all think the cost of that should be zero in that case, and we all know it often very much isn’t. And I didn’t want anyone to think I was relying on that.
I could have worded it to make this more clear but I think the point stands when clarified/understood—the proximate goal of the blackmail release is to be harmful, whereas the proximate goal of the gossip might or might not be.
If others agree it is misleading I will make this more explicit.
Yes. It’s doing a few things, and that’s a lot of it.
We’re not out. Certainly we’re not out of games—e.g. Magic: The Gathering. Which would be a big leap.
For actual basic board games, the one I want to see is Stratego, actually; the only issue is I don’t know if there are humans who have bothered to master it.
Important not to let the perfect be the enemy of the good. There’s almost certainly a better way to find mentors, but this would be far better than doing nothing, so I’d say that if you can’t find an actionable better option within (let’s say) a month, you should just do it. Or just do it now and replace it with a better method when you find one.
In that particular case, I would have chosen different names that likely would have resonated better, but felt it was important not to change the paper’s chosen labels, even though they seemed not great. That might have been an error.
Their explanation is that the question is whether the weaker candidates will concede that they are weaker than the strong ones and let the strong ones all win, or whether they will challenge the stronger candidates.
Suggestions for other ways to make this more clear are appreciated. I’d like to be able to write things like this in a way that people actually read and benefit from.