Copying my comment from Substack:
I don’t have a similarly crisp model for chakras. I don’t think they involve anything we’d consider supernatural or that requires exotic physics. I think they do involve emotional expression and issues, expressed in different areas of the body, manipulable in various beneficial ways. But I don’t know what the extent of those ways is, or what all the ways are. Quite possibly there’s more to it than this that I don’t know about yet.
I think there are two levels of detail here in the model: the method, and the content. For Tarot, the method is “random access to a library of perspectives that give you the ability to unstick yourself / train thinking,” and the content involves, like, seventy-eight different elements of the distribution, which then might have deeper models of why 78 and why _those_ 78 and so on.
For chakras, the method feels nearly as crisp to me. Humans have a mostly-but-not-completely shared body plan; thus it’s not that surprising if they have a mostly-but-not-completely shared introspective experience of the physiological components of their emotions.
The content involves why seven chakras, why those specific spots, why those specific emotions, etc. I don’t understand that particularly well (I don’t understand the body particularly well), but it’s not like I could generate the Tarot deck from scratch either!
I mean, my take on this is that around two decades ago Eliezer thought AI safety could be an incredibly hard problem, then spent a lot of time checking, and now has lots of reasons to believe that it is an incredibly hard problem. Those reasons are spelled out elsewhere, with this post just trying to point at the problem of irretrievability.