You should ignore the news unless it’s of historic import. Russia’s invasion of Ukraine constitutes an event of historic import.
One could argue for an even stronger position: you should ignore the news unless it 1) affects you and 2) there is something that you could do about it. I’m trying to think about whether 1 and 2 are true. Like most of us, I have some thoughts, but ultimately I’m not a geopolitics person and don’t really know what I’m talking about. And so, this post is a request for comments, not an authoritative write-up.
May as well start now
Suppose we have the best case scenario: the war ends, tensions disappear, and we all go back to our lives. How long will that last? How long until tensions get serious again? 1 year? 5? 10? 25? 50? 100?
I lean towards the earlier end of that spectrum. 80,000 Hours would “guess the chance of a nuclear war is 2-20% in the next 200 years”. Using that as a jumping-off point, the chance of tensions developing to the point where the threat is notable seems like it’d be a lot higher, especially given the current invasion.
Even if your estimate is towards the later end of this spectrum, it still seems like the sort of thing we will need to deal with at some point in our lifetimes. So then, any effort spent educating oneself and preparing right now probably won’t be wasted. It’s like buying honey for your pantry: it doesn’t expire, and you know you will use it eventually.
Decide what level of risk you are ok with ahead of time
There has been some talk about this in the context of covid: you should decide ahead of time at what level of case counts you’d be ok with returning to, eg. outdoor dining. What level for indoor dining? What level for large indoor gatherings?
Because if you don’t, you risk some sort of status quo bias. And similarly, you risk the boiling frog thing happening to you. The case counts just slowly get lower and lower and lower, but each change is too small/gradual to get you to take action.
I think a similar principle makes sense in the context of nuclear risk. I could see tensions escalating, and escalating, and escalating, and escalating, but each escalation seems too small to justify an action like moving from the city to the country. To guard against this, I think it’d make sense to have some, preferably quantitative, idea in your mind about when you would take various actions.
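To make the pre-commitment idea concrete, here is a minimal sketch in Python. Every threshold below is a made-up placeholder, and the actions are just the ones discussed in this post; the point is only to write the triggers down before each small escalation has a chance to feel normal:

```python
# A hypothetical pre-committed plan, keyed on your own estimate of
# P(nuclear attack within the next year). All thresholds are made up;
# the actions are the ones considered elsewhere in this post.
ACTION_THRESHOLDS = [
    (0.02, "move from the city to the countryside"),
    (0.10, "move to a safer country, like Iceland"),
    (0.30, "move to a remote island, like Tristan da Cunha"),
]

def planned_actions(p_attack):
    """Return every action whose pre-committed trigger has been crossed."""
    return [action for threshold, action in ACTION_THRESHOLDS
            if p_attack >= threshold]

# At a 5% estimate, only the first trigger has been crossed.
print(planned_actions(0.05))
```

Writing the rules down ahead of time is the whole trick: when tensions actually escalate, you compare your current estimate against a list you made while calm, rather than re-litigating the decision under the boiling-frog dynamic.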
What can an individual actually do to lower their risk?
I don’t have a great grasp on this question, but I can think of a few things.
Moving to a remote location
I’m thinking of remote islands like Tristan da Cunha that are apparently habitable and sustainable. Making a move like this would be a pretty extreme option, so the risk would have to justify it. But it’s hard for me to imagine a remote location like this getting hit by a nuke. And who knows, if you look on the bright side, maybe it’d be a good change of pace and life experience.
Moving to a different country
What I have in mind here are places like Iceland and South Africa. It wouldn’t be quite as safe as living in a remote location, but it seems to me that countries like these are significantly safer than places like the US and western Europe.
Moving to the countryside
As an example, I live in Portland, Oregon right now, which is a city. If I moved out east to the countryside, or even to a place like Bend, OR, judging by maps like these and these, I’d be a lot safer, because Portland is more of a target. This probably wouldn’t be as safe as moving to a different country, but moving to a different country would be a lot more inconvenient.
Building a bomb shelter
Imagine that you are interested in self-defense. You could learn how to punch and kick and wrestle and fight. That would improve your chances if you did end up in an altercation. But y’know what would really improve your chances? Avoiding the altercation in the first place!
That is the analogy that comes to my mind when I think about bomb shelters. They feel to me like learning to punch and kick. I suppose it’s not a bad idea and it might prove useful, but it also seems like you’d get a lot more mileage out of avoiding the situation to begin with.
Buy useful items
I only looked into this very briefly.
Hazmat suits are something I thought of, but it seems that the bigger risk is from ingesting dangerous particles. Particles that touch your skin aren’t as big a deal, and normal clothing that covers your skin will probably suffice. Also, these suits aren’t really enough to protect you from radiation to any significant extent.
Masks might be somewhat helpful. My initial impression is that I’m skeptical of their usefulness, but I’m not sure.
Some rough EV math
Let’s try to look at the question of how much different levels of risk “cost”.
Take a look at this table. The top row is how likely you are to get attacked. The left column is how likely you are to die, given that you got attacked. The resulting values assume that you value life at the standard $10M, and show how much the risk of getting attacked “costs”.
There’s a lot more to say about calculating the costs and factoring various things in, but the goal here is to just keep it simple and get a rough picture.
How would you use this information? Well, think of it like this. Maybe you estimate that moving to Greenland moves you from (50%, 1/10) to (90%, 1/1k), say because tensions have escalated from where they are as of 2/28/22. According to the table, you’d be “saving” $491k by making that move. So you have to ask yourself whether the inconvenience is worth that amount of money. Seems like something of a close call.
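As a sanity check on that $491k figure, the arithmetic can be sketched in a few lines. The function and its name are my own reconstruction; only the numbers come from the example:

```python
# Expected cost of a risk level, in dollars:
# cost = P(attack) * P(death | attack) * value_of_life
VALUE_OF_LIFE = 10_000_000  # the "standard" $10M figure

def risk_cost(p_attack, p_death_given_attack, value_of_life=VALUE_OF_LIFE):
    """Expected dollar cost of a (P(attack), P(death | attack)) pair."""
    return p_attack * p_death_given_attack * value_of_life

# The Greenland example: moving from (50%, 1/10) to (90%, 1/1k).
before = risk_cost(0.50, 1 / 10)    # ~$500k
after = risk_cost(0.90, 1 / 1000)   # ~$9k
saving = before - after             # ~$491k
print(f"Saving from the move: ${saving:,.0f}")
```

So the “saving” is just the difference between the two expected costs; everything else in the table follows the same one-line formula.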
Personally, I place a much higher value on life than that standard $10M value. I elaborate on it here, but the value I use is $10B, not $10M. Let’s see what the table looks like if we use that as our value of life. Actually, for easy reference, let’s do this:
Value of life: $10M
Value of life: $100M
Value of life: $1B
Value of life: $10B
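Since the expected cost is just a product, multiplying the value of life by 10 multiplies every cell of the table by 10, so the four tables above are really one table at four scales. A quick sketch (the example cell of (50%, 1/10) is arbitrary; the function is my own reconstruction):

```python
# The cost of a risk level scales linearly in the value you place on life,
# so each of the four tables above is the previous one times 10.
def risk_cost(p_attack, p_death_given_attack, value_of_life):
    return p_attack * p_death_given_attack * value_of_life

for value_of_life in (10**7, 10**8, 10**9, 10**10):  # $10M, $100M, $1B, $10B
    cost = risk_cost(0.50, 1 / 10, value_of_life)
    print(f"value of life ${value_of_life:,} -> cost ${cost:,.0f}")
```

One practical upshot of the linearity: whatever value of life you use, the *ratio* between two options is unchanged, so a higher value of life doesn’t reorder the options, it just makes the stakes large enough to justify more inconvenient moves.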
Focusing on what matters
So far, there are a few distinct questions I’ve laid out:
How much do you value life?
How much do you value not having to move to X (or take whatever other action)?
How likely is it that you get attacked?
How likely is it that you die in an attack?
The answers to #1 and #2 are personal. The answers to #3 and #4 are… hard to figure out.
That’s something I’d like to comment on, actually. It’s been said before, but when things are hard to figure out, you still have to make your best guess and go with it. It’s tempting to think “this is too hard, I don’t know how to think about or estimate this”. But what’s next? Go on with your life? Follow the herd? Each of those is an action. The reason to take an action is that it is your best guess as to what is optimal. Maybe your best guess is to continue with your life or follow the herd, but you should take those actions because they are your best guess, not because they are the default.
Anyways. I think questions #3 and #4 are the ones that are most relevant, in a sense. Those are the questions that are “moving”. Your answers to #1 and #2 are what they are. They’re important, but they’re static, so once you have your answers you don’t need to continue thinking about them.
On the other hand, your answers to #3 and #4 will change as the situation (d)evolves. And as they change, it might become worth taking various actions, like moving to Iceland. Again, I usually try to ignore the news, but this is an exception. This is a situation where the news actually might affect you in a way that is important and where you can do something about it.
Estimating the risk
One approach, call it approach #1, would be to think about it from first principles. Educate oneself. Have conversations with friends and fellow rationalists. Iterate. Etc.
This doesn’t seem like a very fruitful approach to me. I’m not sure how to articulate why, exactly, but it just feels like the sort of situation where approach #2 of trusting the experts would make more sense.
One way to think about it is rationality skills vs domain specific knowledge. It feels like the sort of thing that’d require a lot of domain specific knowledge, knowledge that takes years and years and years to accumulate. Rationality skills are certainly important, but I’d expect there to be smart people in the field with something like an 8/10 in rationality skills and 10/10 in domain specific knowledge, and that seems like it’d win out over a 10/10 in rationality skills and 4/10 in domain specific knowledge.
I might be wrong though, and this is a very important (and interesting!) question. My confidence in approach #2 over approach #1 here is maybe something like 70-80%. More qualitatively, I have a decent feeling about it, but wouldn’t feel particularly surprised if I was wrong.
Also, to be clear, I’m not trying to say that it is an either-or sort of thing. In reality it makes sense to incorporate various sources of information and opinion, including one’s own gears-level understanding. I’m more so trying to pose the question of which direction it makes sense to lean, and how strongly we should be leaning in that direction. My sense is that we should lean in the direction of domain experts moderately strongly, but also keep an eye on what smart people who aren’t in the field have to say, and on markets like Metaculus.
Finding the right experts
Let’s suppose we are doing what I just described and leaning towards approach #2. I think the first step there is to find the right experts to follow.
I want to be clear: in advocating for approach #2 over approach #1, I’m not saying that any ol’ expert will do. And I’m certainly not saying to just blindly listen to and follow what various governmental organizations tell you. Following Zvi’s commentary on covid over the years (wow, plural), I’ve begun putting a pretty small amount of weight behind what they say. What I am advocating for is finding some experts who seem particularly dependable, and following them.
How does one do that? Good question. It’s a very practical question in this situation, and a very interesting question more generally.
My instinct says to find some PhD students to talk to. Hop on a call, get a feel for the landscape, and iterate from there. They’re more likely to talk to you than actual professors, and are smarter than undergrads. Or maybe it’s a signal-to-noise thing; I’m sure there are some great undergrads out there.
I also want to note that the word “some” is plural. I think it’s important to hear from multiple sources. Once you hear the same thing being advocated for from various directions, it’s usually a pretty good sign.
Another approach is to use one’s network. Ask friends, family, and coworkers if they know anyone smart in the geopolitics space. And that’s just one degree of separation away from you. From there you can iterate, continuing the process and traversing the social network.
Similarly, we have each other! LessWrong! We’re a network of people. A community. There’s thousands of us. We’re probably relatively well connected. Focused on the tech industry, sure, but there’s gotta be geopolitics people out there. Or people who know geopolitics people. If that’s you, please speak up! :)
On the other hand, perhaps there is currently a place in the rationalist community where high quality conversation (on questions #3 and #4) is already happening. Anyone know? I’m not seeing much on https://www.lesswrong.com. If it is private, I’d appreciate a DM.
The unit of caring
I’ve been talking about this idea of finding experts to help us with questions #3 and #4, but it might also be helpful to find experts who can address questions of what actions are even valuable in the first place. Maybe moving to Greenland isn’t actually helpful. Or maybe it is helpful but building a bunker or something would be more convenient and just as helpful. Maybe there are important actions I’m not aware of.
I spent some hours googling around for this stuff, but I wasn’t very happy with the quality of the content I came across. It was a lot of clickbait-y posts, amateurish blogs, and blogs from wacky-seeming people.
Again, this post is a request for comments. Hopefully I said a few useful things, but I’m not too optimistic that I have. My main goal here is to start a conversation and continue the process of figuring this out together, with a focus on what actions would be instrumentally useful for us to take.
In the software community, Requests for Comments (RFCs) are a thing. But on the spectrum of “quick initial conversation starter” to “authoritative thing”, RFCs that you see in the software community fall a lot closer to the latter than this post does, from what I understand. ↩︎
If you have a family, for example, there are multiple lives in question. Still, I think the goal here is just to get within an order of magnitude (or two), so the extra lives probably aren’t too relevant. But one thing to consider is that, eg. if you’re an xrisk researcher, you are having a large positive impact on many other lives, and if you died that impact would be lost, so you might want to incorporate that. ↩︎
Well, I certainly have things to say about the first question. And I also have things to say about the second. But for the purposes of this post, I think we can just call them personal and move on. ↩︎
I don’t think I did a very good job of making this point. Oh well. ↩︎