Note that this article isn’t included in the latest edition of Rationality: AI to Zombies, for roughly the reasons listed here (if I remember correctly).
I don’t think the “copybook headings” are a direct reference to truth. A bit of googling suggests that the following are representative examples of those copybook headings, which seem to me more like proverbs and references to old wisdom than like some core concept of truth:
“Eternal vigilance is the price of success.”
“If wishes were horses then beggars would ride.”
“All is not gold that glitters.”
“Well begun is half done.”
I do think the poem works well for the point you are trying to make, but figured I would provide a bit of context.
(That was indeed the piece that crystallized this intuition for me, and I think Ray got this broader concept from me)
You can order the comments by oldest first, which gives you at least some of that.
We do also record when each vote is cast, so a time machine is possible, though querying and aggregating all the votes for a large thread might be too much for a browser client.
Interesting. Do you have a link to the document that sparked this thought?
I am quite glad you posted it, and don’t think that comment should discourage you from posting more similar things.
In general I am very excited to see more conversations being written up as transcripts and posted online, and would be really sad if this prevented that trend from taking hold.
Ok, let me give it a try. I am trying to not spend too much time on this, so I prefer to start with a rough draft and see whether there is anything interesting here before I write a massive essay.
You say the following:
Do chakras exist?
In some sense I might be missing the point, since the answer to this is basically just “no”. Obviously I still think they form a meaningful category of something, but in my model they form a meaningful category of “mental experiences” and “mental procedures”, and definitely not a meaningful category of real atom-like things in the external world.
Another way might be that you think chakras do not literally exist like planes do, but you can make a predictive profit by pretending that they do exist.
I don’t think the epistemically healthy thing is to pretend that they exist as some external force. Here is an analogy that I think kind of explains the idea of “auras”, which is a broader category than just chakras:
Imagine you are talking to a chess master who has played 20,000 hours of chess. You show him a position and he responds with “Oh, black is really open on the right”. You ask “what do you mean by ‘open on the right’?”. He says: “Black’s defense on the right is really weak, I could push through that immediately if I wanted to”, while making the motion of picking up a piece with his right hand and pushing it through the right side of black’s board.
As you poke him more, his sense of “openness” will probably correspond to lots of proprioceptive experiences like “weak”, “fragile”, “strong”, “forceful”, “smashing”, “soft”, etc.
Now, I think it would be accurate to describe (in Buddhist/spiritual terms) the experience of the chess master as reading an “aura” off the chessboard. It’s useful to describe it as such because a lot of its mental representation is cashed out in the same attributes that people and physical objects in general have, even though its referent is the state of some chess game, which obviously doesn’t have those attributes straightforwardly.
My read is that “chakras” are basically an attempt to talk about the proprioceptive subsets of many mental representations. So in thinking about something like a chessboard, you can better understand your own mental models of it by getting a sense of what the natural clusters of proprioceptive experiences are that tend to correlate with certain attributes of models (like how feeling vulnerable around your stomach corresponds to a concept of openness in a chess position).
You can also apply them to other people, and try to understand what other people are experiencing by trying to read their body-language, which gives you evidence about the proprioceptive experiences that their current thoughts are causing (which tend to feed back into body-language), which allows you to make better inferences about their mental state.
I haven’t actually looked much into whether the usual set of chakras tend to be particularly good categories for the relationship between proprioceptive experiences and model attributes, so I can’t speak much about that. But it seems clear that there are likely some natural categories here, and referring to them as “chakras” seems fine to me.
The identification of the pain-pleasure axis as the primary source of value (Bentham).
I will mark that I think this is wrong, and if anything I would describe it as a philosophical dead-end. Complexity of value and all of that. So listing it as a philosophical achievement seems backwards to me.
I am confused — obviously my thoughts cause some changes in behavior. Maybe not immediately (though I am highly dubious of the whole “your actions can be predicted before they become conscious” bit), but definitely in the future (by causing some kind of back-propagation of updates that change my future actions).
The opposite would make no sense from an evolutionary-adaptiveness perspective (having a whole System-2-like thingy would be a giant waste of energy if it never caused any change in actions); it doesn’t at all correspond to high-level planning actions, isn’t what the literature on S1 and S2 says (which does indeed make the case that S2 determines many actions), and doesn’t correspond well to my internal experience.
I do think that I tend to update downwards on the likelihood of a piece being true if it seems to have obvious alternative generators for how it was constructed that are unlikely to be very truth tracking. Obvious examples here are advertisements and political campaign speeches.
In that sense I think it’s reasonable to distrust pieces of writing that seem like they are part of some broader conflict, and as such are unlikely to be generated in anything close to an unbiased way. A lot of conflict-theory-heavy pieces tend to be part of some conflict, since accusing your enemies of being evil is memetic warfare 101.
I am not sure (yet) what the norms for discussion around these kinds of updates should be, but I did want to bring up that there exist some valid Bayesian inferences here.
This has been my default reference for the past few years:
It’s from 2016, so I don’t actually know where things are right now. But presumably not that much has changed.
The cost of doing so has an effect on productivity (due to nutritional effects, but also effects on attention and general hassle, as well as coordination costs), and using a fraction of that additional productivity to help animals results in a much larger reduction in net animal suffering (because of the abundance of easy opportunities for helping animals, due to the horrible state of animal lives).
My general takeaway from that post was that in terms of psychometric validity, most developmental psychology is quite bad. Did I miss something?
This doesn’t necessarily mean the underlying concepts aren’t real, but in terms of the quality metrics that psychometrics tends to assess things on, I don’t think the evidence base is very good.
The Open Philanthropy Project created an updated version (I am not a huge fan of it, but it does have a lot of the things you care about): https://www.openphilanthropy.org/blog/new-web-app-calibration-training
Duplicate of: https://www.lesswrong.com/posts/XzetppcF8BNoDqFBs/help-forecast-study-replication-in-this-social-science
Huh, weird. I will look into it. Just to check, by the green comment thing do you mean the following interaction? (which takes less than a second for me as you can see)
Post-pages also take less than a second for me to load (initially, though there is some JS initialization afterwards), so that’s also confusing:
Might be browser specific, or something else weird going on.
Yeah, bug on our side. Just merging a PR that fixes it. Will be fixed within the day.
Ping us on Intercom (the chat icon in the bottom right corner) and I can help you change it to whatever you want. Sorry for the hassle with the Google login — it’s been on my to-do list for a while to fix that.
This seems right, though something about this still feels confusing to me in a way I can’t yet put into words. Might write a comment at a later point in time.