Sunset at Noon

A meandering series of vignettes.

I have a sense that I’ve halfway finished a journey. I expect this essay to be most useful to people similarly-shaped-to-me, who are also undergoing that journey and could use some reassurance that there’s an actual destination worth striving for.

  1. Gratitude

  2. Tortoise Skills

  3. Bayesian Wizardry

  4. Noticing Confusion

  5. The World is Literally on Fire...

  6. ...also Metaphorically on Fire

  7. Burning Out

  8. Sunset at Noon

Epistemic Status starts out “true story”, and gets more (but not excessively) speculative with each section.

i. Gratitude

“Rationalists obviously don’t *actually* take ideas seriously. Like, take the Gratitude Journal. This is the one peer-reviewed intervention that *actually increases your subjective well being*, and costs barely anything. And no one I know has even seriously tried it. Do literally *none* of these people care about their own happiness?”

“Huh. Do *you* keep a gratitude journal?”

“Lol. No, obviously.”

- Some Guy at the Effective Altruism Summit of 2012

Upon hearing the above, I decided to try gratitude journaling. It took me a couple years and a few approaches to get it working.

  1. First, I tried keeping a straightforward journal, but it felt effortful and dumb.

  2. I tried a thing where I wrote a poem about the things I was grateful for, but my mind kept going into “constructing a poem” mode instead of “experience nice things mindfully” mode.

  3. I tried just being mindful without writing anything down. But I’d forget.

  4. I tried writing gratitude letters to people, but it only occasionally felt right to do so. (This came after someone actually wrote me a handwritten gratitude letter, which felt amazing, but it felt a bit forced when I tried it myself.)

  5. I tried doing gratitude before I ate meals, but I ate “real” meals inconsistently so it didn’t take. (Upon reflection, maybe I should have fixed the “not eating real meals” thing?)

But then I stumbled upon something that worked. It's a social habit, which I worry is a bit fragile: my girlfriend and I do it together each night, and on nights when one of us travels, I often forget.

But this is the thing that worked. Each night, we share our Grumps and Gratitudes.

Grumps and Gratitudes goes like this:

  1. We share anything we’re annoyed or upset about. (We call this The Grump. Our rule is to not go *searching* for the Grump, simply to let it out if it’s festering so that when we get to the Gratitude we actually appreciate it instead of feeling forced.)

  2. We share three things that we're grateful for that day. On some bad days this is hard, but we can at least return to old standbys (“I'm breathing”, “I have you with me”), and we always perform the action of at least *attempting* an effortful search.

  3. Afterwards, we pause to actually feel the Grates: viscerally remember each thing and why it was nice. If we were straining to feel grateful and had to reach into the bottom of the barrel to find something, we at least try to cultivate a mindset where we fully appreciate that thing.

Maybe the sun just glinted off your coffee cup nicely, and maybe that didn’t stop the insurance company from screwing you over and your best friend from getting angry at you and your boss from firing you today.

But… in all seriousness… in a world whose laws of physics had no reason to make life even possible, a universe mostly full of empty darkness and no clear evidence of alien life out there, where the only intelligent life we know of sometimes likes to play chicken with nuclear arsenals...

...somehow some tiny proteins locked together ever so long ago and life evolved and consciousness evolved and somehow beauty evolved and… and here you are, a meatsack cobbled together by a blind watchmaker, and the sunlight is glinting off that coffee cup, and it’s beautiful.

Over the years, I’ve gained an important related skill: noticing the opportunity to feel gratitude, and mindfully appreciating it.

I started writing this article because of a specific moment: I was sitting in my living room around noon. The sun suddenly filtered in through the window, and on this particular day it somehow seemed achingly beautiful to me. I stared at it for 5 minutes, happy.

It seemed almost golden, in the Robert Frost sense. Weirdly golden.

It was like a sunset at noon.

(My coffee cup at 12:35pm. Photo does not capture the magic, you had to be there.)

And that might have been the entire essay here—a reminder to maybe cultivate gratitude (because it’s, like, peer reviewed and hopefully hasn’t failed to replicate), and to keep trying even if it doesn’t seem to stick.

But I have a few more things on my mind, and I hope you’ll indulge me.

ii. Tortoise Skills

Recently I read an article about a man living in India, near a desert sandbar. When he was 14 he decided that, every day, he would go there to plant a tree. Over time, those trees started producing seeds of their own. By taking root, they helped change the soil so that other kinds of plants and animals could live there.

Fifteen years later, the desert sandbar had become a forest as large as Central Park.

It’s a cute story. It’s a reminder that small, consistent efforts can add up to something meaningful. It also asks an interesting question:

Is whatever you’re going to do for the next 15 years going to produce something at least as cool as a Central Park sized forest?

(This is not actually the forest in question; it's the closest similar-looking image I could easily find under a Creative Commons license. Credited to Your Mildura.)

A Three Percent Incline

A couple months ago, I suddenly noticed that… I had my shit together.

This was in marked contrast to 5 years ago when I decidedly didn’t have my shit together:

  • I struggled to stay focused at work for more than 2 hours at a time.

  • I vaguely felt like I should exercise, but I didn’t.

  • I vaguely felt like I should be doing more productive things with my life, but I didn’t.

  • Most significantly, for the first three years of my involvement with the rationalsphere, I got less happy, more stressed out, and seemed to get worse at thinking. Valley of Bad Rationality indeed.

I absorbed the CFAR mantra of “try things” and “problems can in principle be factored into pieces, understood, and solved.” So I dutifully looked over my problems, and attempted to factor and understand and fix them.

I tried things. Lots of things.

  • I tried various systems and hacks to focus at work.

  • I tried to practice mindfulness.

  • I tried exercising—sometimes maintaining “1 pushup a day” microhabits. Sometimes major “work out at the gym” style things.

  • I tried to understand my desires and bring conflicting goals into alignment so that I wasn’t sabotaging myself.

My life did not especially change. Insofar as it did, it was because I undertook specific projects that I was excited about, which forced me to gain skills.

Years passed.

Somewhere in the middle of this, in 2014, Brienne Yudkowsky wrote an essay about Tortoise Skills.

She divided skills into four quadrants, based on whether a skill was *fast* to learn, and how *hard* it was to learn.

LessWrong has (mostly) focused on epiphanies—concepts that may be hard to arrive at, but that you grasp more or less immediately once you do.

CFAR ends up focusing on epiphanies and skills that can be taught in a single weekend, because, well, they only have a single weekend to teach them. Fully gaining these skills takes a lot of practice, but in principle you can learn them in an hour.

There’s some discussion about something you might call Bayesian Wizardry—a combination of deep understanding of probability, decision theory and 5-second reflexes. This seems very hard and takes a long time to see much benefit from.

But there also seemed to be an underrepresented “easy-but-time-consuming” cluster of skills, where the main requirement was being slow but steady. Brienne went on to chronicle an exploration of deliberate habit acquisition, inspired by a similar project by Malcolm Ocean.

I read Brienne and Malcolm’s works, as well as the book Superhuman by Habit, of which this passage was most helpful to me:

Habits can only be thought of rationally when looked at from a perspective of years or decades. The benefit of a habit isn’t the magnitude of each individual action you take, but the cumulative impact it will have on your life in the long term. It’s through that lens that you must evaluate which habits to pick up, which to drop, and which are worth fighting for when the going gets tough.

Just as it would be better to make 5% interest per year on your financial investments for the rest of your life than 50% interest for one year… it's better to maintain a modest life-long habit than to start an extreme habit that can't be sustained for a single year.

The practical implications of this are twofold.

First, be conservative when sizing your new habits. Rather than say you will run every single day, agree to jog home from the train station every day instead of walk, and do one long run every week.

Second, you should be very scared to fail to execute a habit, even once.

By failing to execute, potentially you’re not just losing a minor bit of progress, but rather threatening the cumulative benefits you’ve accrued by establishing a habit. This is a huge deal and should not be treated lightly. So make your habits relatively easy, but never miss doing them.

Absolutely never skip twice.

I was talking to a friend about a daily habit that I had. He asked me what I did when I missed a day. I told him about some of my strategies and how I tried to avoid missing a day. “What do you do when you miss two days?” he asked.

“I don’t miss two days,” I replied.

Missing two days of a habit is habit suicide. If missing one day reduces your chances of long-term success by a small amount like five percent, missing two days reduces it by forty percent or so.
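The interest-rate analogy from the Superhuman by Habit excerpt above can be checked with quick arithmetic. A minimal sketch: the 5% and 50% rates are the book's, but the 30-year horizon is an assumption I've picked for illustration:

```python
# Toy illustration of the interest-rate analogy from Superhuman by Habit.
# The 5% and 50% rates come from the quote; the 30-year horizon is assumed.

def compound(rate: float, years: int) -> float:
    """Total growth factor after compounding `rate` annually for `years` years."""
    return (1 + rate) ** years

modest_lifelong = compound(0.05, 30)   # 5% per year, sustained for 30 years
extreme_one_year = compound(0.50, 1)   # 50% for a single year, then nothing

print(f"5% for 30 years: {modest_lifelong:.2f}x")   # ~4.32x
print(f"50% for 1 year:  {extreme_one_year:.2f}x")  # 1.50x
```

The sustainable habit compounds to roughly triple the payoff of the unsustainable one, which is the whole argument for sizing habits conservatively.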

“Never miss 2 days” was inspirational in a way that most other habit-advice hadn't been (though this may be specific to me). It had the “tough but fair coach is yelling at you” quality that some people find valuable, in a way that clearly had my long-term interests at heart.

So I started investing in habit-centric thinking. And it still wasn’t super clear at first that anything good was really happening as a result...

...until suddenly, I looked back at my 5-years-ago-self...

...and noticed that I had my shit together.

It was as if I'd been walking for years on what felt like a flat, straight line. But in fact, that line had a 3% incline, and after a few years of walking I looked back and noticed I'd climbed to the top of a hill.

(Also as part of the physical exercise thing sometimes I climb literal hills.)

Some specific habits and skills I’ve acquired:

  • I cultivate gratitude, floss, and do a few household chores every single day.

  • I am able to focus at work for 4-6 hours instead of 2-4 (and semi-frequently get into the zone and do a full 8).

  • Instead of “will I exercise at all today?”, the question is more like “will I get around to doing 36 pushups today, or just 12?”

  • I meditate for 5 minutes on most days.

  • I have systems to ensure I get important things done, and a collection of habits that makes sure that important things end up in those systems.

  • I’m much more aware of my internal mental states, and the mental states of people I interact with. I have a sense of what they mean, and what to do when I notice unhealthy patterns.

  • Perhaps most importantly: the habit of trying things that seem like they might be helpful, and occasionally discovering something important, like improv, or like the website

On the macro level, I’m more the sort of person who deliberately sets out to achieve things, and follow through on them. And I’m able to do it while being generally happy, which didn’t use to be the case. (This largely involves being comfortable not pushing myself, and guarding my slack.)

So if you’ve been trying things sporadically, and don’t feel like you’re moving anywhere, I think it’s worth keeping in mind:

  1. Are you aiming for consistency—making sure not to drop the ball on the habits you cultivate, however small?

  2. If you’ve been trying things for a while, and it doesn’t feel like you’re making progress, it’s worth periodically looking back and checking how far you’ve come.

Maybe you haven’t been making progress (which is indeed a warning sign that something isn’t working). But maybe you’ve just been walking at a steady, slight incline.

Have you been climbing a hill? If you were to keep climbing, and you imagine decades of future-yous climbing further at the same rate as you, how far would they go?

iii. Bayesian Wizardry

“What do you most often do instead of thinking? What do you imagine you could do instead?”

- advice a friend of mine got on Facebook, when asking for important things to reflect on during a contemplative retreat.

I could stop the essay here too. And it’d be a fairly coherent “hey guys maybe consider cultivating habits and sticking with them even when it seems hard? You too could be grateful for life and also productive isn’t that cool?”

But there is more climbing to do. So here are some hills I’m currently working on, which I’m finally starting to grok the importance of. And because I’ve seen evidence of 3% inclines yielding real results, I’m more willing to lean into them, even if they seem like they’ll take a while.

I’ve had a few specific mental buckets for “what useful stuff comes out of the rationalsphere,” including:

  • Epistemic fixes that are practically useful in the shortish term (e.g. noticing when you're ‘arguing for a side’ instead of actually trying to find the truth).

  • Instrumental techniques, which mostly amount to ‘the empirically valid parts of self-help’ (e.g. Trigger Action Plans).

  • Deep Research and Bayesian Wizardry (i.e. high-quality, in-depth thinking that pushes the boundary of human knowledge forward while paying strategic attention to what matters most, working with limited time and evidence).

  • Orientation Around Important Things (i.e. once someone has identified something like X-Risk as a crucial research area, people who aren't interested in specializing their lives around it can still help with practical aspects, like getting a job as an office manager).

Importantly, it used to seem like Deep Research and Bayesian Wizardry was something other people did. I did not seem smart enough to contribute.

I’m still not sure how much it’s possible for me to contribute—there’s a power law of potential value, and I clearly wouldn’t be in the top tiers even if I dedicated myself fully to it.

But in the past year there's been a zeitgeist, initiated by Anna Salamon, around the idea that being good at thinking is genuinely useful, and that if you could only carve out time to actually think (and to practice, improving at it over time), maybe you could actually generate something worthwhile.

So I tried.

Earlier this year I carved out 4 hours to actually think about X-Risk, and I output this blogpost on what to do about AI Safety if you seem like a moderately smart person with no special technical aptitudes.

It wasn’t the most valuable thing in the world, but it’s been cited a few times by people I respect, and I think it was probably the most valuable 4 hours I’ve spent to date.

Problems Worth Solving

I haven’t actually carved out time to think in the same way since then—a giant block of time dedicated to a concrete problem. It may turn out that I used up the low-hanging fruit there, or that it requires a year’s worth of conversations and shower-thoughts in order to build up to it.

But I look at people like Katja Grace—who just sit and actually look at what's going on with computer hardware, or come up with questions to ask actual AI researchers about what progress they expect. And it seems like there are a lot of things worth doing that don't require any weird magic. You just need to actually think about the problem, and then follow that thinking up with action.

I’ve also talked more with people who do seem to have something like weird magic, and I’ve gotten more of a sense that the magic has gears. It works for comprehensible reasons. I can see how the subskills build into larger skills. I can see the broad shape of how those skills combine into a cohesive source of cognitive power.

A few weeks ago, I was arguing with someone about the relative value of LessWrong (as a conversational locus of quality thinking) versus donating money to other causes. I can’t remember their exact words, but a paraphrase:

It’s approximately as hard to have an impact by donating as by thinking—especially now that the effective altruism funding ecosystem has become more crowded. There are billions of dollars available—the hard part is knowing what to do with them. And often, when the answer is “use them to hire researchers to think about things”, you’re still passing the recursive buck.

Someone has to think. And it’s about as hard to get good at thinking as it is to get rich.

Meanwhile, some other conversations I’ve had with people in the EA, X-Risk and Rationality communities could be combined and summarized as:

We have a lot of people showing up, saying “I want to help.” And the problem is, the thing we most need help with is figuring out what to do. We need people with breadth and depth of understanding, who can look at the big picture and figure out what needs doing.

This applies just as much to “office manager” type positions as to “theoretical researcher” types.

iv. Noticing Confusion

Brienne has a series of posts on Noticing Things, which are among the most useful, practical writings on epistemic rationality that I've read.

It notes:

I suspect that the majority of good epistemic practice is best thought of as cognitive trigger-action plans.

[If I’m afraid of a proposition] → [then I’ll visualize how the world would be and what I would actually do if the proposition were true.]

[If everything seems to hang on a particular word] → [then I’ll taboo that word and its synonyms.]

[If I flinch away from a thought at the edge of peripheral awareness] → [then I’ll focus my attention directly on that thought.]

She later remarks:

I was at first astonished by how often my pesky cognitive mistakes were solved by nothing but skillful use of attention. Now I sort of see what’s going on, and it feels less odd.

What happens to your bad habit of motivated stopping when you train consistent reflective attention to “motivated stopping”? The motivation dissolves under scrutiny...

If you recognize something as a mistake, part of you probably has at least some idea of what to do instead. Indeed, anything besides ignoring the mistake is often a good thing to do instead. So merely noticing when you’re going wrong can be over half the battle.
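Brienne's bracketed rules have the literal shape of a lookup from noticed triggers to installed responses. A toy sketch in code, with the triggers and actions paraphrased from the quote (the names and the fallback string are mine, purely illustrative):

```python
# Cognitive trigger-action plans rendered as a literal trigger -> action lookup.
# Triggers and actions are paraphrased from Brienne's examples; this is a toy model.

EPISTEMIC_TAPS = {
    "afraid a proposition is true":
        "visualize how the world would be, and what you'd do, if it were true",
    "everything hangs on a particular word":
        "taboo that word and its synonyms",
    "flinching from a thought at the edge of awareness":
        "focus attention directly on that thought",
}

def planned_response(noticed_trigger: str) -> str:
    """Return the installed action for a noticed trigger, if one exists."""
    return EPISTEMIC_TAPS.get(noticed_trigger, "no plan installed yet: just notice")
```

The point of the structure is that the hard part is the *key lookup*: noticing that a trigger fired at all. Once noticed, the response is often already known.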

She goes on to chronicle her own practice at training the art of noticing.

This was helpful to me, and one particular thing I've been focusing on lately is noticing confusion.

In the Sequences and Methods of Rationality, Eliezer treats “noticing confusion” like a sacred phrase of power, whispered in hushed tones. But for the first 5 or so years of my participation in the rationality community, I didn’t find it that useful.

Confusion Is Near-Invisible

First of all, confusion (at least as I understand Eliezer to use the term) is hard to notice. The phenomenon here is when bits of evidence don’t add up, and you get a subtle sense of wrongness. But then instead of heeding that wrongness and making sense of it, you round the evidence to zero, or you round the situation to the nearest plausible cliché.

Some examples of confusion are simple: CFAR’s epistemic habit checklist describes a person who thought they were supposed to get on a plane on Thursday. They got an email on Tuesday reminding them of their flight “tomorrow.” This seemed odd, but their brain brushed it off as a weird anomaly that didn’t matter.

In this case, noticing confusion is straightforwardly useful—you miss fewer flights.

Some instances are harder. A person is murdered. Circumstantial evidence points to one particular suspect. But there's a tiny note of discord. The evidence doesn't quite fit. A jury that's tired and wants to go home is looking for excuses to get the verdict over with.

Sometimes it’s harder still: you tell yourself a story about how consciousness works. It feels satisfactory. You have a brief flicker of awareness that your story doesn’t explain consciousness well enough that you could build it from scratch, or discern when a given clump of carbon or silicon atoms would start being able to listen in a way that matters.

In this case, it’s not enough to notice confusion. You have to follow it up with the hard work of resolving it.

You may need to brainstorm ideas and validate hypotheses. To find the answer quickly and accurately, you may need to not just “remember base rates”, but to actually think about Bayesian probability as you explore those hypotheses with scant evidence to guide you.

Noticing confusion can be a tortoise skill, if you seek out opportunities to practice. But doing something with that confusion requires some wizardry.

(Incidentally: at least once earlier in this essay, you were given the opportunity to practice noticing confusion. Can you identify where it was?)

v. The World Is Literally On Fire

I’ve gotten pretty good at noticing when I should have been confused, after the fact.

A couple weeks ago, I was walking around my neighborhood. I smelled smoke.

I said to myself: “huh, weird.” An explanation immediately came to mind—someone was having a barbecue.

I do think this was the most likely explanation given my knowledge at the time. Nonetheless, it is interesting that a day later, when I learned that many nearby towns in California were literally on fire, and the entire world had a haze of smoke drifting through it… I thought back to that “huh, weird.”

Something had felt out of place, and I could have noticed. I'd been living in suburbia for a month or two without noticing this smell, and while it probably was a barbecue, something about it felt off.
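That small “felt off” can be made quantitative. Here is a toy Bayesian update; the two hypotheses come from the story, but every number is invented for illustration:

```python
# A toy Bayesian update on smelling smoke in suburbia. All numbers are
# invented for illustration; only the hypotheses come from the story.

priors = {"barbecue": 0.95, "wildfire": 0.05}        # assumed prior beliefs
likelihoods = {"barbecue": 0.10, "wildfire": 0.80}   # assumed P(evidence | hypothesis)

# Bayes' rule: posterior is proportional to prior * likelihood, then normalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in posteriors.items():
    print(f"P({h} | smoke smell) = {p:.2f}")
```

With these made-up numbers, barbecue stays the leading hypothesis (about 70%), but wildfire jumps from 5% to about 30%: exactly the shape of “probably a barbecue, but something felt off.”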

(When the world’s on fire, the sun pretty unsubtly declares that things are not okay)

Brienne actually took this a step farther in a Facebook thread, paraphrased:

“I notice that I'm confused about the California Wildfires. There are a lot of fires, all across the state. Far enough apart that they can't have spread organically. Are there often wildfires that spring up at the same time? Is this just coincidence? Do they have a common cause?”

Rather than stop at “notice confusion”, she and people in the thread went on to discuss hypotheses. Strong winds were reported. Were they blowing the fires across the state? That still seemed wrong; the fires were skipping over large areas. Was it because California is in a drought? That would explain why lots of fires could abruptly start, but not why they all started the same day.

The consensus eventually emerged that the fires had been caused by electrical sparks: the common cause was the strong winds, which downed power lines in multiple locations. California being a dry tinderbox of fuel then enabled the fires to catch.

I don’t know if this is the true answer, but my own response, upon learning about the wildfires and seeing the map of where they were, had simply been, “huh.” My curiosity stopped, and I didn’t even attempt to generate hypotheses that adequately explained anything.

There are very few opportunities to practice noticing confusion.

When you notice yourself going “huh, weird” in response to a strange phenomenon… maybe that particular moment isn’t that important. I certainly didn’t change my actions due to understanding what caused the fires. But you are being given a scarce resource—the chance, in the wild, to notice what noticing confusion feels like.

Generating and evaluating hypotheses can be done in response to artificial puzzles and abstract scenarios, but the initial “huh” is hard to replicate, and I think it's important to train not just to notice the “huh” but to follow it up with the harder thought processes.

vi. …also, Metaphorically On Fire

It so happened that this was the week that Eliezer published There Is No Fire Alarm for Artificial General Intelligence.

In the classic experiment by Latane and Darley in 1968, eight groups of three students each were asked to fill out a questionnaire in a room that shortly after began filling up with smoke. Five out of the eight groups didn’t react or report the smoke, even as it became dense enough to make them start coughing. Subsequent manipulations showed that a lone student will respond 75% of the time; while a student accompanied by two actors told to feign apathy will respond only 10% of the time.

The fire alarm doesn’t tell us with certainty that a fire is there. In fact, I can’t recall one time in my life when, exiting a building on a fire alarm, there was an actual fire. Really, a fire alarm is weaker evidence of fire than smoke coming from under a door.

But the fire alarm tells us that it’s socially okay to react to the fire. It promises us with certainty that we won’t be embarrassed if we now proceed to exit in an orderly fashion.

In typical Eliezer fashion, this was all a metaphor for how there's not ever going to be a moment when it feels socially or professionally safe to be publicly worried about AGI.

Shortly afterwards, AlphaGo Zero was announced to the public.

For the past 6 years, I’ve been reading the arguments about AGI, and they’ve sounded plausible. But most of those arguments have involved a lot of metaphor and it seemed likely that a clever arguer could spin something similarly-convincing but false.

I did a lot of hand-wringing, listening to Pat Modesto-like voices in my head. I eventually (about a year ago) decided the arguments were sound enough that I should move from the “think about the problem” phase to the “actually take action” phase.

But it still didn’t really seem like AGI was a real thing. I believed. I didn’t alieve.

AlphaGo Zero changed that, for me. For the first time, the arguments were clear-cut. There was not just theory but concrete evidence that learning algorithms could improve quickly, that architecture could be simplified to yield improvement, and that you could go from superhuman to super-superhuman in a year.

Intellectually, I’d loosely believed, based on the vague authority of people who seemed smart, that maybe we might all be dead in 15 years.

And for the first time, seeing the gears laid bare, I felt the weight of alief that our civilization might be cut down in its prime.


(Incidentally, a few days later I was at a friend's house, and we smelled something vaguely like gasoline. Everyone said “huh, weird”, and then turned back to their work. On this particular occasion I said “Guys! We JUST read about fire alarms and how people won't flee rooms with billowing smoke and CALIFORNIA IS LITERALLY ON FIRE RIGHT NOW. Can we look into this a bit and figure out what's going on?”

We then examined the room and brainstormed hypotheses and things. On this occasion we did not figure anything out and eventually the smell went away and we shrugged and went back to work. This was not the most symbolically useful anecdote I could have hoped for, but it’s what I got.)

vii. Burning Out

People vary in what they care about, and how they naturally handle that caring. I make no remark on what people should care about.

But if you're shaped something like me, it may seem like the world is on fire at multiple levels. AI seems around 15% likely to kill everyone within 15 years. Even if it weren't, people around the world would still be dying for stupid, preventable reasons, and still others would be living but cut off from their potential.

Meanwhile, civilization seems disappointingly dysfunctional in ways that turn stupid, preventable reasons into confusing, intractable ones.

The metaphorical fires I notice range in order-of-magnitude-of-awfulness, but each seems sufficiently alarming that it completely breaks my grim-o-meter and renders it useless.

For three years, the rationality and effective altruism movements made me less happy, more stressed out, in ways that were clearly unsustainable and pointless.

The world is burning, but burning out doesn’t help.

I don’t have a principled take on how to integrate all of that. Some people have techniques that work for them. Me, I’ve just developed crude coping mechanisms of “stop feeling things when they seem overwhelming.”

I do recommend that you guard your slack.

And if personal happiness is a thing you care about, I do recommend cultivating gratitude. Even when it turns out the reason your coffee cup was delightfully golden was that the world was burning.

Do what you think needs doing, but there's no reason not to be cheerful about it.

viii. Sunset at Noon

Earlier, I noted my coffee cup was beautiful. Weirdly beautiful. Like a sunset at noon.

That is essentially, verbatim, the series of thoughts that passed through my head, giving you approximately as much opportunity to pay attention as I had.

If you noticed that sunsets are not supposed to happen at noon, bonus points to you. If you stopped to hypothesize why, have some more. (I did neither).

Sometimes, apparently, the world is just literally on fire and the sky is covered in ash and the sun is an apocalyptic scareball of death and your coffee cup is pretty.

Sometimes you are lucky enough for this not to matter much, because you live safely a few hours’ drive away, and your friends and the news and all let you know.

Sometimes, maybe you don’t have time for friends to let you know. You’re living an hour away from a wildfire that’s spreading fast. And the difference between escaping alive and asphyxiating is having trained to notice and act on the small note of discord as the thoughts flicker by:

“Huh, weird.”

(To the right: what my coffee cup normally looks like at noon)