One year later, I am pretty happy with this post, and I still refer to it fairly often, both for the overall frame and for the specifics about how AI might be relevant.
I think it was a proper attempt at macrostrategy, in the sense of trying to give a highly compressed but still useful way to think about the entire arc of reality. And I’ve been glad to see more work in that area since this post was published.
I am of course pretty biased here, but I'd be excited to see folks give this one serious consideration.
To my mind, what this post did was clarify a kind of subtle, implicit blind spot in a lot of AI risk thinking. I think this was inextricably linked to the writing itself leaning into a form of beauty that doesn't tend to crop up much around these parts. And though the piece traces a lot of it back to Yudkowsky, I think the absence of green is much wider than him, and in many ways he's not the worst offender.
It's hard to compress the insights accurately: the piece itself draws a lot on soft metaphor and on explaining what green is not. But personally it made me realise that the posture I and others tend to adopt when thinking about superintelligence and the arc of civilisation has a tendency to shut out some pretty deep intuitions that are particularly hard to translate into forceful argument. Even if I can't easily say what those intuitions are, I can now at least point to the gap in conversation by saying there's some kind of green thing missing.