Appreciate the comment even if you disliked the post! Here are some responses to various bullet points in a kind of random order that made sense to me:
The post highlights the market for lemons model, but then the examples keep not fitting the lemons setup. Covid misinformation wasn’t an adverse selection problem, nor was having spies in the government, nor was the Madman Theory situation.
The market for lemons model is really just an extremely simple model of a single-dimensional adversarial information environment. It’s so simple that it’s hard to fit to any real-world situation, since it’s really just a single dimension of price signals. As you add more dimensions of potential deception, things get more complex, and that’s when the paranoia stuff becomes more useful.
I think COVID misinformation fits the lemons market situation pretty well, though of course not perfectly. Happy to argue about it. Information markets are a bit confusing because marginal costs and marginal willingness to pay are both very low, but I do think that, at least for informed observers, most peaches (i.e. high-quality information sources) ended up being priced out by low-quality lemons, and this created a race to the bottom (among various other things that were going on).
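To make the race-to-the-bottom mechanic concrete, here is a minimal sketch of the standard adverse-selection unraveling (the numbers and the simple “price equals average quality” rule are my own illustration, not anything from the post): buyers can’t tell peaches from lemons, so they only offer the average quality of whoever is still selling, which prices out the best remaining sources each round.

```python
# Toy adverse-selection ("market for lemons") simulation.
# Sellers have private quality; buyers only observe the average quality
# of whoever is still selling, and pay that. Sellers whose quality is
# worth more than the offered price drop out, so average quality keeps
# falling: a race to the bottom. All numbers are illustrative.

import random

def lemons_market(n_sellers=1000, rounds=20, seed=0):
    rng = random.Random(seed)
    # Each seller's quality is also the price they need to stay in the market.
    qualities = [rng.uniform(0, 100) for _ in range(n_sellers)]
    for r in range(rounds):
        if not qualities:
            break
        # Buyers can't distinguish sellers, so they offer the average quality.
        price = sum(qualities) / len(qualities)
        # High-quality sellers ("peaches") are priced out and exit.
        qualities = [q for q in qualities if q <= price]
        print(f"round {r}: offered price {price:.1f}, sellers left {len(qualities)}")

lemons_market()
```

The point of the sketch is just the shape of the dynamic: each round the best remaining sellers find the pooled price insulting and leave, which drags the pooled price down further. In the information analogy the “price” is presumably something fuzzier (attention, trust), which is part of why the fit is imperfect.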
e.g. that’s not how concerns about surfaces played out
Also happy to argue about this. I feel pretty confident that the right model of why many people kept believing in surface transmission is roughly this: early on it wasn’t ruled out, there happened to be a bunch of costly signaling you could do by wiping down lots of surfaces, and those costly signals later created strong pressure to rationalize the signals, even when the evidence was relatively clear.
Zvi’s COVID model has a bunch of writing on this: https://www.lesswrong.com/posts/P7crAscAzftdE7ffv/covid-19-my-current-model#Sacrifices_To_The_Gods_Are_Demanded_Everywhere
A sacrifice to the Gods (post on this topic to be linked in when finally written) is an action with physical costs but with no interest in any meaningful physical benefits, taken in the hope that it will make one less blameworthy. Things are bad because we have sinned. The Gods demand sacrifice. If we do not act appropriately repentant and concerned, things will surely get worse.
Once we act appropriately, we are virtuous and will doubtless be saved. We can stop. There is no need to proceed in a way that would actually work, once the Gods have been placated. Everything will work out.
If you don’t make the proper sacrifices, then anything that goes wrong means it’s your fault. Or at least, you’ll always worry it is your fault. As will others. If you do make the proper sacrifices, nothing is your fault. Much better.
If the action is efficient and actually were to solve the problem in a meaningful way, that would invalidate the whole operation. You can either show you are righteous and trust in the Gods, or you go about actually solving the problem. For obvious reasons, you can’t do both.
A steelman of this is that Complexity is Bad and nuance impossible. If we start doing things based on whether they make sense that sets a terrible example and most people will be hopelessly lost.
Thus, we sanitize packages.
[...]
Surfaces Are Mostly Harmless
Early on, it made sense to be paranoid about surfaces. It was established that the virus could ‘survive’ for various periods of time. So if you want to be ‘safe’ you need to clean in some form, or wait that period of time. That reduces the risk to almost zero, if done properly.
Absent that, we are sent into a constant frenzy of ‘deep cleaning’ and viewing surfaces as deadly weapons that infect anyone they touch. Jobs are mentally ranked largely by the number of surfaces they require people to touch, and economic activity prevented if too many surfaces might be involved.
That level of paranoia might continue to make sense if this was ‘if one zombie slips past the line everyone dies.’ The precautionary principle is a thing. That’s not what we’re dealing with.
It’s been months. We don’t have concrete examples of infection via surfaces. At all. It increasingly seems like while such a route is possible, and must occasionally happen, getting enough virus to cause an infection, in a live state, via this route, is very hard. When you wash your hands and don’t touch your face, it’s even harder than that.
Meanwhile, those who refuse to touch surfaces like a pizza delivery box end up in more crowded locations like grocery stores, resulting in orders of magnitude more overall risk.
And yet, despite being this certain, it’s damn hard to stop sanitizing packages. And it’s even harder to be this forceful in writing. Because what will happen if I don’t make the sacrifices?
You can disagree with this narrative or explanation of course, but it’s definitely what I believe and something I thought a lot about!
nor was having spies in the government
I definitely agree that government spies do not fit the lemons market example particularly well, or only insofar as any adversarial information environment fits the model at all. I didn’t mean to imply otherwise! I don’t have a great formal model of the spies situation. Maybe there is one out there.
The way you talk about the 3 strategies I get the sense that you’re saying: when you’re in an adversarial information scenario here are 3 options to consider. But in the examples of each strategy, the other 2 strategies don’t really make sense. They are structurally different scenarios.
Strong disagree. I think combining all three is indeed the usual course of action! I think paranoid people in adversarial information environments basically always do all three. When I have worked at dysfunctional workplaces I have done all three, and I have seen other people do all three in other places that induced paranoia.
I actually mostly had trouble finding examples where someone was doing largely only one of these things, instead of generally engaging in a whole big jumble of paranoia, which I think can be decomposed into roughly these three strategies, though they are hard to disentangle.
Like, IDK, think about your usual and normal depiction of a paranoid person in fiction (referring to fiction here because no real-life reference comes to mind that we would both be familiar enough with to make the point). I think you will find them doing all three of “becoming suspicious of large swaths of information and dismissing them for being biased”, “becoming vindictive, self-isolating and engaging in purges” and “becoming erratic and hard to predict”. IMO it’s clearly a package!
The Madman Theory example doesn’t feel like an example of the Nixon administration being in an adversarial information situation or being paranoid. It’s trying to make threats in a game theory situation.
I agree I would have liked to motivate the madman example more. This specific example was largely the result of trying to find a case where you can see the strategy I am trying to illustrate more in isolation, with the other causal pathways factored out: in an international diplomacy context you do have the enemy trying to model you, predict what you will do, and use that against you, but you have less of the part where much of your information can be manipulated, since you generally have relatively little bandwidth with your adversaries, and strong pre-existing group identities make purges irrelevant, unless you are dealing with a spy-heavy situation.
I am not amazingly happy with it as an example, but I do think the underlying strategy is real!
A thing that I think of as paranoia centrally involves selective skepticism: having some target that you distrust, but potentially being very credulous towards other sources of views on that topic, such as an ingroup that also identifies that target as untrustworthy, or engaging in confirmatory reasoning about your own speculative theories that don’t match the untrusted target. That’s missing from your theory and your examples.
Huh, this doesn’t really match how I would use the term much at all. I would describe the above as “mindkilled” in rationalist terms, referring to “politics is the mindkiller”, but paranoia to me is centrally invoked by high-bandwidth environments that are hard to escape from, and in which the fear of manipulation is directed at people close and nearby and previously trusted. Indeed, one of the big things I would add to this post, or maybe will write in a separate post sometime this week, is that the most intense paranoia is almost always the result of the violation of what appeared to be long-established trust. Things like a long-term relationship falling apart, or a long-term ally selling out to commercial incentives, or a trusted institution starting to slip and lie and engage in short-term optimization.
If you are in an environment with a shared outgroup that you have relatively little bandwidth to, I think that is an environment with much potential for polarization and various forms of tribal thinking, but I don’t think it’s that strong of a driver of paranoia.
Despite all the examples, there’s a lack of examples that are examples of the core thing—here’s an epistemically adversarial situation, here’s a person being paranoid, here’s what that looks like, here’s how that’s relatively appropriate/understandable even if not optimal
Yeah, I would like a post that has some more evocative core example. IMO the best option here would be a piece of fiction that shows the experience of paranoia from the inside. I do think this is a pretty hard thing to write, and for now largely outside of my range as a writer, though maybe I will try anyways!
I made some attempts at this when drafting this post, but it became pretty quickly clear to me that actually conveying the internal experience here would both take a lot of words and a bunch of skill, and trying to half-ass it would just leave the reader confused. I also considered trying to talk about my personal experiences with paranoia-adjacent strategies during e.g. my time working at CEA, but that seemed if anything more difficult than writing a fictional dialogue.
“there are roughly three big strategies” is a kind of claim that I generally start out skeptical of, and this post failed to back that one up
I would definitely love a better model of whether these really are the exhaustive set of correct strategies. I have some pointers to why I roughly think they are, but they are pretty handwavy at this point. Let me try to elucidate them a tiny bit right now:
The fundamental issue that paranoia is trying to deal with is an adversary predicting your outputs well enough that, to them, you can basically be treated as part of the environment (in MIRI-adjacent circles I’ve sometimes heard this referred to as “diagonalization”).
If I think about this in a computer-science-y way, I am imagining a bigger agent simulating a smaller agent, with a bunch of input channels that represent the observations the smaller agent makes of the world. Some fraction of those input channels can be controlled by the bigger agent. The act of diagonalization is basically finding some set of controllable inputs that, no matter what the uncontrollable parts of the input say, result in the smaller agent doing what the bigger agent wants.
Now, in this context, three strategies stand out to me that conceptually make sense:
You cut off your internal dependence on the controlled input channels
You reduce the amount of information that your adversary has about your internals so they can model your internals less well
You make yourself harder to predict, either by performing complicated computations to determine your actions, or by making which computation you perform to arrive at the result highly dependent on input channels you know are definitely uncontrolled
And like… in this very highly simplified CS model, those are roughly the only three strategies that make sense to me at all? I can’t think of anything else that makes sense to do, though maybe it’s just a lack of imagination. Like, I feel like between the three of them you have varied all the variables that make sense to vary in this toy model.
And of course, it’s really unclear how well this toy model translates to reality! But it’s one of the big generators that made me think the “3 strategies” claim makes sense.
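For what it’s worth, here is a minimal sketch of that toy model in code. Everything in it (the weighted-threshold decision rule, the channel counts, the specific numbers) is my own made-up illustration rather than anything canonical; the only point is that the big agent searches over the inputs it controls while quantifying over the ones it doesn’t, and each of the three strategies breaks that search in a different place.

```python
# Toy model of "diagonalization": a big agent controls some of a small
# agent's input channels and searches for a setting of those channels that
# forces the small agent's output, no matter what the uncontrolled channels
# say. All weights, thresholds and channel counts are made up for illustration.

from itertools import product

def can_force(agent, n_controlled, n_uncontrolled, target=1):
    """Return forcing controlled inputs if the big agent can guarantee
    `target` regardless of the uncontrolled inputs, else None."""
    for c in product([0, 1], repeat=n_controlled):
        if all(agent(c, u) == target
               for u in product([0, 1], repeat=n_uncontrolled)):
            return c
    return None

def make_agent(w_controlled, w_uncontrolled, threshold):
    """Small agent: a simple weighted-threshold decision over its inputs."""
    def agent(c, u):
        score = sum(w * x for w, x in zip(w_controlled, c)) + \
                sum(w * x for w, x in zip(w_uncontrolled, u))
        return 1 if score > threshold else 0
    return agent

# Baseline: the controlled channels carry enough weight that the big agent
# can force the decision it wants.
baseline = make_agent([3, 3], [1, 1], threshold=4)
print("baseline forcing inputs:", can_force(baseline, 2, 2))           # (1, 1)

# Strategy 1: cut off internal dependence on the controlled channels.
cut_off = make_agent([0, 0], [1, 1], threshold=4)
print("after cutting controlled channels:", can_force(cut_off, 2, 2))  # None

# Strategy 2: hide your internals. The adversary optimizes against its model
# of you (threshold=4), but your real threshold is private and different,
# so the "forcing" inputs it finds don't actually force you.
adversary_model = make_agent([3, 3], [1, 1], threshold=4)
real_agent = make_agent([3, 3], [1, 1], threshold=7)
forcing = can_force(adversary_model, 2, 2)
outputs = [real_agent(forcing, u) for u in product([0, 1], repeat=2)]
print("forcing inputs vs. hidden internals give outputs:", outputs)    # not all 1

# Strategy 3: key your computation to channels the adversary can't control,
# here by flipping the decision based on an uncontrolled channel.
def unpredictable(c, u):
    return baseline(c, u) ^ u[0]
print("after keying to uncontrolled channels:", can_force(unpredictable, 2, 2))  # None
```

Obviously a four-bit threshold rule is nothing like a real mind, but it at least makes the “search over what you control, quantify over what you don’t” structure of diagonalization, and the three corresponding countermoves, concrete.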
Curious about your response, but also, this comment is long and you shouldn’t feel pressured to respond. I do broadly like your thinking in the space so would be interested in further engagement!
That helped give me a better sense of where you’re coming from, and more of an impression of what the core thing is that you’re trying to talk about. Especially helpful were the diagonalization model at the end (which I see you have now made into a separate post) and the part about “paranoia to me is centrally invoked by high-bandwidth environments that are hard to escape from” (while gesturing at a few examples, including you at CEA). Also your exchange elsewhere in the comments with Richard.
I still disagree with a lot of what you have to say, and agree with most of my original bullet points (though I’d make some modifications to #2 on your division into three strategies and #6 on selective skepticism). Not sure what the most productive direction is to go from here. I have some temptation to get into a big disagreement about covid, where I think I have pretty different models than you do, but that feels like it’s mainly a tangent. Let me instead try to give my own take on the central thing:
The central topic is situations where an adversary may have compromised some of your internal processes, especially when it’s not straightforward to identify what they’ve compromised, fix your processes, or remove their influence. There’s a more theoretical angle on this, which focuses on what good strategies are in response to these sorts of situations, potentially even on what the optimal response is to a sufficiently well-specified version of this kind of scenario. And there’s a more empirical angle, which focuses on what people in fact do when they think they might be in this sort of situation, which could include major errors (including errors in identifying what situation you’re in, e.g. how much access/influence/capability the adversary has in relation to you, or how adversarial the relationship is), though it probably often involves responses that are at least somewhat appropriate.
This is narrower than what you initially described in this post (being in an environment with competent adversaries) but broader than diagonalization (which is the extreme case of having your internal processes compromised, where you are fully pwned). Though possibly this is still too broad, since it seems like you have something more specific in mind (but I don’t think that narrowing the topic to full diagonalization captures what you’re going for).
IMO the best option here would be a piece of fiction that shows the experience of paranoia from the inside.
I’m curious whether you got a chance to read my short story. If you can suspend your disbelief (i.e., resist applying the heuristic which might lead one to write the story off as uncritically polemical), I think you might appreciate it :-)
I also have a short story about (some aspects of) paranoia from the inside.
You’re a beautiful writer :) If you ever decide to host a writing workshop, I’d love to connect/attend.