Nate Soares’ Life Advice

Disclaimer: Nate gave me some life advice at EA Global; I thought it was pretty good, but it may or may not be useful for other people. If you think applying any of this would be actively harmful for you, you probably shouldn't.

Notice subtle things in yourself

This includes noticing things like confusion, frustration, dissatisfaction, enjoyment, etc. For instance, if you’re having a conversation with somebody and they’re annoying you, it’s useful to notice that you’re getting a little frustrated before the situation gets worse.

A few weeks ago my colleagues and I wanted to do something fun, and decided to play laser tag at our workplace. However, we couldn't find the laser tag guns. As I began to comb the grounds for the guns a second time, I noticed that I felt like I was just going through the motions and didn't really expect my search to be fruitful. At this point I stopped and thought about the problem, and realized that I had artificially constrained the solution space to things that would result in us playing laser tag at the office, rather than things that would result in us having fun. So I stopped looking for the guns and we did an escape room instead, which made for a vastly more enjoyable evening.

If you’re not yet at the point where you can notice unsubtle things in yourself, you can start by working on that and move up from there.

Keep doing the best thing, even if you don’t have a legible story for why it’s good

Certainly the actions you're taking should make sense to you, but your reasoning doesn't have to be 100% articulable, and you don't need to justify yourself in an airtight way. Some things are easier to argue for than others, but being easier to argue for doesn't make them more correct. For instance, I'm doing AI alignment stuff, and I have the option of reading either a textbook on linear algebra or E.T. Jaynes' probability theory textbook.

Reading about linear algebra is very easy to justify in a way that can’t really be disputed; it’s just obviously true that linear algebra is directly and widely applicable to ML. It’s harder to justify reading Jaynes to the same level, even though I think it’s a pretty sound thing to do (I think I will become better at modeling the world, learn about various statistical pitfalls, absorb Jaynes’ philosophical and historical insights, etc.), and in fact a better use of my time right now than learning linear algebra in more depth.

This bit of advice is mostly about not needing to be able to justify yourself to other people (e.g. friends, family) in order to take the best visible action. However, you might also have internalized social pressure, such that you feel the need to justify a course of action to yourself in a way that would be legible to other people or justifiable in a social setting. This is also unnecessary.

Relatedly, you don’t need to “get” motivation; you can just continue to take the best action you can see.

Don’t go insane

Apparently a good number of people in Nate’s social circle have gone insane—specifically, they have taken facts about the world (e.g. the universal prior being malign) as “invitations” to go insane. He also noted that many of these people took LSD prior to going insane, and that this may have “loosened” something in their minds.

This may be a particular danger for people who value taking ideas seriously as a virtue, because they might go full throttle on an idea that conflicts with common sense, and end up insane as a result. When asking a non-Nate for feedback on this post, I was told that some concrete things that people have taken as “invitations” to go insane are: decision theory (specifically acausal trade), things thought while meditating, and the idea that minds are made of “parts” (e.g. sub-agents).

Nate says that the way you avoid this pitfall is that when you hear the “siren call of insanity,” you choose to stay sane instead. This seems vaguely reasonable to me, but it’s not very crisp in my mind and I don’t quite know what it looks like to apply this in practice.

Reject false dichotomies

Don’t epistemically commit to the best option you can currently see, especially not in a way that would permanently alter you or prevent you from backtracking later. For instance, if the only two moral philosophies you’re aware of are Christianity and nihilism, and you decide that God doesn’t actually exist (and therefore Christianity is obviously wrong), you don’t have to go full throttle down the nihilism path.

Don’t throw the baby out with the bathwater—in fact, don’t lose any part of the baby. If all the epistemic options seem to violate something important to you, don’t just blast through that part of your values. Apparently this helps with not going insane.

Research advice

Nate told me that the most important skill for doing research is not thinking you know things when you actually don’t. This is closely tied to noticing confusion. It’s also related to “learning (important) things carefully”; for instance, if you’re teaching yourself physics, you want to make sure you truly understand the material you’re learning, and move at a pace such that you can do that (rather than going through it quickly but haphazardly).