[Lecture Club] Awakening from the Meaning Crisis
John Vervaeke has a lecture series on YouTube called Awakening from the Meaning Crisis. I thought it was great, so I’m arranging a lecture club to discuss it here on Less Wrong. The format is simple: each weekday I post a comment that’s a link to the next lecture and the summary (which I plan on stealing from the recap at the beginning of the next lecture), and then sometimes comment beneath it with my own thoughts. If you’re coming late (even years late!) feel free to join in, and go at whatever pace works for you.
(Who is John Vervaeke? He’s a lecturer in cognitive science at the University of Toronto. I hadn’t heard of him before the series, which came highly recommended to me.)
I split the lecture series into three parts: the philosophical, religious, and cultural history of humankind (25 episodes) related to meaning, the cognitive science of wisdom and meaning (20 episodes), and more recent philosophy related to the meaning crisis specifically (5 episodes). Each episode is about an hour at regular speed (but I think they’re understandable at 2x speed). I am not yet aware of a good text version of the lectures; I also have some suspicion that some important content is not in the text itself, and so even if I transcribed them (or paid someone to) it’d still be worth watching or listening to them.
I think the subject matter is 1) very convergent with the sort of rationality people are interested in on LW, and 2) relevant to AI alignment, especially thinking about embedded agency.
Religio/Perennial Problems/Reverse Engineering Enlightenment
Episode 50: Tillich and Barfield
Sad I missed this in March. Have heard some Vervaeke on Curt Jaimungal’s channel.
Episode 7: Aristotle’s World View and Erich Fromm
The agent-arena relationship is, in my view, one of the core concepts in the course. My version is that you perceive yourself as an ‘agent’, able to ‘take actions’ (often according to some script) in a way that is matched up to perceiving your environment as ‘an arena’ that ‘presents affordances’. Much of familiarizing yourself with a new place or culture or job or so on is learning how to properly understand the agent-arena relationship (“oh, when I want this done, I go over there and push those buttons”). The CFAR taste/shaping class is, I think, about deliberately seeing this happen in your mind. Importantly, basically all actions will ground their meaning in this agent-arena relationship.
One of the things that I think is behind a lot of ‘modern alienation’ is that the arenas are so narrow, detached, and voluntary, in contrast to the arenas perceived by a hunter-gatherer tribesman.
Why is ‘voluntary’ alienating? For example, suppose I’m in a soccer league; I have some role to play, and some satisfaction in how well I play that role, and so on, but at the root of the satisfaction I get from the soccer league is that I chose to participate. There’s not really ‘something bigger than me’ there; I could have decided to be in a frisbee league instead, or play Minecraft, or watch Netflix, or so on and so on. Around me are other people making their own choices, which will generally only line up with mine by accident or selection effect. [“Huh, everyone at the soccer league is interested in soccer, and none of my non-league friends are into soccer.”]
The Dragon Army experiment was a study in contrasts, here; 11 people were in the house, and attendance at the mandatory events was generally 11, and attendance at voluntary events was generally 2 or 3. Even among people who had self-selected to live together, overlap in interests was only rarely precise enough that it was better to do something together than doing a more narrowly matched thing alone. But this makes it harder to build deep meaning out of a narrow voluntary arena, when it’s a nearly random choice selected from a massive list of options.
[See also the Gervais Principle, in particular the bit where the Losers value diversity because it allows everyone to be above-average in a way that is only meaningful to them. Shared meaning means conflict over a single ranking, instead of peace between many different rankings. But that’s also how you get Lotus!]
Yeah, I think you hit the nail on the head with your point on voluntary. The thing I hear most often from people who experience a meaning crisis is “Why”—“Why this specifically? Why this and not this other thing? What’s the purpose?”. This also relates, for me, to Choices are Bad. If you have lots of options it’s much harder to answer this nagging “Why” question. When the possibility space is large you need much more powerful principles to locate the right choice (this also relates to relevance realization).
The process that produces that question about meaning might start out with simply trying to decide what to do, notice the option space is so large that it needs better principles to successfully locate something, then start asking questions about purpose and meaning. The distress is an inability to locate relevance.
My brother used to say that whenever someone started to talk with him about “the meaning of life”, he wanted to just go over, give them a really good massage, and ask if the question still bothered them. It of course doesn’t answer or defuse the question, but it has a point. When they’re getting a massage it’s fairly clear what the right thing to do is: focus on the massage and don’t worry about other stuff. It gives them peace from mind.
And I was once able to answer that question for someone well enough that it seemed to actually give her enough clarity and understanding to be peaceful and satisfied. In jargon, my answer gave her the tools to better find what’s relevant (at least if I’m not too optimistic in my interpretation of her response; she also had it pretty easy compared to others who have meaning crises).
I actually answered her in text, so I can share what I wrote (translated from Hebrew). It’s mostly based on ideas from the Sequences, and it was before I heard of Vervaeke (I think before these lectures even came out).
It was this quote from Eric Weinstein: “Don’t be afraid to fool yourself into thinking that life is meaningful and that, against all odds, *you* have an important part to play in the world. If it’s all meaningless you’ll have done no harm lying to yourself. And if by some chance this matters, you will waste less time.” The principle I distilled from it is that the existence of meaning precedes the importance of truth (I’ll be happy to discuss that one).
Please. I’m not sure what it means, exactly, but I’m interested.
To say something is important is to make some value judgement, and it requires that things already have meaning. So if you say “There’s no meaning. Everything is meaningless”, and I ask “and why do you believe that?”, and you say “because it is true”, and I ask, “but if everything is meaningless, why is it important what the truth is?”, how do you answer without assuming some meaning? How can you justify the importance of anything, including truth, without any meaning?
So if everything is meaningless, you can believe otherwise and nothing bad will happen, even though it’s not the truth, because everything is meaningless (and thus nothing, including truth, can be important). If things are meaningful, you can believe they are meaningful, because it’s true. And also, if things are meaningful and you believe otherwise, that may be bad, because truth may indeed be important.
So for things to be important to you (including truth), things first have to be meaningful. Therefore the existence of meaning precedes the importance of truth, and if there’s no meaning then nothing can say you shouldn’t believe otherwise.
P.S.: Vervaeke also said something similar: “Before you assess truth, things have to be meaningful to you”.
Thank you, I believe I understand
Note the possibility of the other sort of modal confusion: trying to meet your having needs through the being mode. (“I am dry on the inside.”)
I think Vervaeke’s position is that this isn’t much of a problem. That is, the higher levels of development also contain the lower levels of development, and so can see and properly situate the having needs and being needs. If you need to eat to not be hungry, and you need to be a good parent, you might go hungry so that your child has enough to eat, or you might not, depending on your best judgment of the situation. If you need to not have drunk hemlock in order to live, and you need to be true to your principles, you might drink hemlock or you might not, depending on your best judgment of the situation.
[I’ve been reading through The Courage to Be by Paul Tillich, which comes up near the end of the lecture series, and the relevant part of his take on religion is that the most important bit is draining the fear of death to make possible regular life (which is everywhere colored by the presence of death). If death is actually infinitely bad, then it doesn’t make sense to get into a car, but how can you live a meaningful life without activities like getting in a car that bear some risk of death?]
But it is still a problem sometimes / you do actually have to use judgment to balance them. A friend of mine, early in the pandemic, was trying to get her community to prepare, and her community responded with something like “you seem like you’re acting out of fear in a way that seems unhealthy,” which I would now characterize as thinking my friend was “focusing on the having-need of safety” instead of “focusing on the being-need of detachment”, or something. I don’t know the full details, but as I understand it they didn’t take sufficient precautions and then COVID spread through the community. (COVID is, of course, in that weird middle zone where this might actually have been fine in retrospect, as I don’t think they had any deaths or long COVID, but I don’t think the reasons they didn’t prepare were sufficiently sensitive to how bad COVID was.)
The modal confusion seems like one of the useful models/concepts Vervaeke shares. I admit I kind of forgot it, but it seemed useful when I first watched the lecture, and it seems useful again now (a large part of why I forgot it could be that I pretty much binged the lectures).
Anyway, your observation is good:
This sounds like what the Buddhists did. Instead of trying to fulfill your having desires, become someone who doesn’t desire them (being mode).
This can be both beneficial and harmful. Minimalism is an example where it’s beneficial. You recognize you are being pumped with having needs/desires that can be relinquished, so you become someone who is satisfied with less.
It could be harmful when the having need isn’t a need that should be relinquished, or you become something you shouldn’t. For example, you have a need for companionship, but for some reason it’s difficult for you to get, so you tell yourself that the other sex is awful and you shouldn’t get involved with them. There probably are fine ways to relinquish that need (monks do that and they seem fine), but when it doesn’t work, like in this example, we call that denying your needs, and it makes you miserable.
Your having needs stem from what you are, so it makes sense it would be possible to solve them through transforming yourself, but not so much the other way around (or at all?). I need food because I am human, I can solve that either by getting food or becoming something that doesn’t need food (fact check: Can’t. Growth mindset: Yet). But not every change is an improvement, so attempts to become something different can harm you.
What some Buddhists did. :-) While there are branches of Buddhism that take renunciation as the primary goal, there are also those who just consider it one tool among others (e.g.).
Yes, thanks for adding precision to that statement :) I only have a small familiarity with Buddhism.
I do know at least 1 person (...maybe 2, from another “bad childhood” case) who completely lost touch with their ability to detect their own hunger, and had to rely on social conventions to remember to eat.
(This person’s childhood was awful. I think they had been stuck in a lot of situations where they couldn’t satisfy their need for food through the “having” frame. While it might be impossible to not need food, it is possible for someone to adjust to not want or think about food much.)
This person was otherwise incredibly well-adjusted*, but the “no sense of hunger” thing stuck.
Do not recommend, btw. It seems to be something that is very hard to unlearn, once acquired. In the absence of other people, “timers” or “actual wooziness” were the shitty secondary indicators these people came to rely on.
* This one was well-adjusted compared to most people, period.**
** Given what he went through, this struck me as an unusual (but pleasant!) surprise. This person’s life was far more difficult than most. But he seemed to be able to view a lot of his tragedies as statistics, and he still found it worth living. Had an incredible knack for making found-family, which probably helped.
Damn… That sounds terrible. Maybe that’s how it’s possible to die of hunger playing video games? I was always confused when I heard those stories, as I can’t imagine a game being so addicting that I don’t notice I’m hungry (the other option is that these stories are somehow exaggerated / not real; I haven’t looked into it).
When I did one meal a day intermittent fasting for sufficiently long (4 months, maybe?), I mostly lost my non-physiological sense of hunger (i.e. I wouldn’t notice that I hadn’t eaten in 30 hours or w/e until I was like “huh, my blood sugar is low”). I think I currently have a weak sense of hunger, which is more frequently lonely mouth than “I forgot to eat” or w/e.
My experience of it is mostly positive? Like, I don’t have much trouble eating lunch every day, and have habituated to eating enough at once to sustain me for a day. [People are often surprised the first time they see me with four mealsquares for a meal :P]
On a little further thought: “weaker sense of hunger” could be fine or beneficial for some people, and negative for others.
But some people don’t seem to be able to undo this change, after doing it. So my advice around it defaults to cautionary, largely for that reason. It’s hard to adjust something intelligently after-the-fact, when you can only move a knob easily in 1 direction. (And from my tiny sliver of anecdatums, I think this might be true for at least 1 of the mental-reconfigurations some people can do in this space.)
P.S. “Lonely mouth” is a VASTLY better term (and framing) than “oral fixation.” Why the hell did Western Culture* let Freud do this sort of thing to the joint-metaphor-space?
* Do we have a canonical term for “the anthro for decentralized language canon” yet?**
** I get the feeling that a fun (and incredibly-stupid) anthropomorphizing metaphor could easily exist here. New words as offerings, that can be accepted or rejected by facets of Memesis. Descriptivist linguists as the mad prophets of a broken God. Prescriptivists and conlang-users as her ex-paladins or reformers, fallen to the temptations of lawfulness and cursed with his displeasure. An incomplete reification for “Language as They Are,” in contrast to the platonic construct of an “Orderly Language that Could Be.”
Episode 2: Flow, Metaphor, and the Axial Revolution
One of the things I really like about this series is the way in which cognition is viewed as this double-edged sword, where it is specifically the things that make it good that also make it bad. The ability to quickly reach conclusions is both what makes intelligence useful—you need less sensory data / less time to decide things—and what makes it problematic—you jump to incorrect conclusions more quickly as well. This is, of course, also my view on AI alignment: the problem is not that people build robots and then foolishly decide to put guns on the robots. The problem is that we only know how to make the first-order cognition, where we know how to make optimizers that search across a wide possibility space for things that maximize some score, with no attention on whether or not they have the right score function. So the robots we build now are very susceptible to illusion and self-deception.
This also feels very tied to the spirit behind Less Wrong: intelligence and rationality are distinct things, where rationality is mostly focusing on the ways in which you personally are subject to illusion and self-deception, and need to rearrange your thinking such that your intelligence is helping you instead of an obstacle.
I didn’t understand the connection he was drawing between causal modelling and flow.
It sounded like he was really down on learning mere correlations, but in nature knowing correlations seems pretty good for being able to make predictions about the world. If you know that purple berries are more likely to be poisonous than red berries, you can start extracting value without needing to understand what the causal connection between being purple and being poisonous is.
I didn’t understand why he thought his conditions for flow (clear information, quick feedback, errors matter) were specifically conducive to making causal models, or distinguishing correlation from causation. Did anyone understand this? He didn’t elaborate at all.
This also shows up in Pearl; I think humans are in a weird situation where they have very simple intuitive machinery for thinking about causation, and very simple formal machinery for thinking about correlation, and so the constant struggle when talking about them is keeping the two distinct.
Like, there’s a correlation between purple berries and feeling ill, and there’s also a correlation between vomiting and feeling ill. Intuitive causal reasoning is the thing that makes you think about “berries → illness” instead of “vomiting <-> illness”.
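That intuition can be made concrete with a toy simulation. All the probabilities below are invented, and the causal structure (berries cause illness, illness causes vomiting) is simply hard-coded, so this illustrates the distinction rather than discovering it:

```python
import random

random.seed(0)

def simulate(n, do_vomit=False):
    """Toy model of the causal chain: berry -> illness -> vomiting.

    All probabilities are invented. do_vomit=True forces everyone to vomit,
    modelling the intervention do(vomit = true).
    Returns (P(ill), P(ill | vomit)).
    """
    ill_total, vomit_total, ill_and_vomit = 0, 0, 0
    for _ in range(n):
        berry = random.random() < 0.5
        ill = random.random() < (0.8 if berry else 0.1)  # illness is caused by the berry
        vomit = True if do_vomit else random.random() < (0.7 if ill else 0.05)
        ill_total += ill
        vomit_total += vomit
        ill_and_vomit += ill and vomit
    return ill_total / n, ill_and_vomit / max(vomit_total, 1)

p_ill, p_ill_given_vomit = simulate(100_000)
# Observationally, vomiting strongly "predicts" illness (~0.92 here)...
p_ill_forced, _ = simulate(100_000, do_vomit=True)
# ...but forcing everyone to vomit leaves the illness rate unchanged (~0.45),
# because vomiting is downstream of illness, not a cause of it.
```

The observational correlation can’t tell “berries → illness” apart from “vomiting ↔ illness”; only the intervention does, which is Pearl’s do-operator in miniature.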
Try flipping each of the conditions.
Information that is obscure or noisy instead of clear makes it harder to determine causes, because the similarities and differences between things are obscured. If the berries are black and white, it’s very easy to notice relationships; if the berries are #f5429e and #f54242, you might misclassify a bunch of the berries, polluting your dataset.
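To put a rough number on how misclassification pollutes the dataset, here is a toy model with invented rates: a strong colour–poison relationship is easy to spot with clear labels and nearly washed out with noisy ones.

```python
import random

random.seed(1)

def colour_predicts_poison(n, misread_rate):
    """How often the perceived colour matches whether the berry is poisonous.

    Invented rule: purple berries are poisonous 90% of the time, red ones 10%.
    With probability misread_rate the forager misreads the colour
    (the #f5429e-vs-#f54242 case).
    """
    hits = 0
    for _ in range(n):
        purple = random.random() < 0.5
        poisonous = random.random() < (0.9 if purple else 0.1)
        seen_purple = purple if random.random() >= misread_rate else not purple
        hits += seen_purple == poisonous
    return hits / n

clear = colour_predicts_poison(50_000, 0.0)   # ~0.90: an easy pattern to learn
noisy = colour_predicts_poison(50_000, 0.4)   # ~0.58: barely better than chance
```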
Feedback that’s slow means you can’t easily confirm or disconfirm hypotheses. If eating one black berry makes you immediately ill, then once you come across that hypothesis you can do a few simple checks. If eating one black berry makes you ill 8-48 hours later, then it’ll be hard to tell whether it was the black berry or something else you ate over that window. If you ate a dozen different things, you now have to run a dozen different (long!) experiments.
If errors are irrelevant, then you’re just going to ignore the information and not end up making any models related to it. The more relevant the errors are, the more of your mental energy you can recruit to modeling the situation.
Why those three, and not others? Idk, this is probably just directly sourced from the literature on flow, which likely has experiments that look into varying these different conditions and trying out others.
I was thinking that there were grounds to think that flow is an experience of lots of implicit learning, but I was much more lost on why flow would be conducive to more. Like, if I have a proof streak then there is going to be more fodder for more and more proofs, but most of that is going to be irrelevant calculation and dead-ends that don’t lead to theorems. And there is no guarantee of success. At some point whatever is getting me the results and enabling them is going to run out. Success doesn’t by itself generate success.
Assyrian Armies of the Axial-Age: Alphabetical, Arithmetic, and Affluent.
Episode 1: Introduction
There’s a great SSC post, Read History of Philosophy Backwards, which seems relevant to framing the first half of the series. That is, the point of talking about shamans isn’t that it’s better than what we’re doing today, or a direct response to the meaning crisis; the point of looking at shamans is in part to figure out how they worked (both what problems they were solving, and how they were solving them) and in part to figure out what life / society was like before there were any shamans.
I was a bit bugged by the ‘placebo effect’ discussion, mostly because I think he worded things wrong; ‘placebo effect is 30-40% as effective as full medicine’ is different from ‘you do 30-40% better with placebo than nothing’.
What is the difference between the placebo wordings? Are you not including the placebo half in the full medicine, and considering anything not part of the chemical medicine to not be medicine?
I think their denominators are different. The first wording is “(medicine − placebo) / (medicine − no treatment) = 0.65”, whereas the second wording is “(placebo − no treatment) / (no treatment) = 0.35”.
I am still a bit confused. I would read the first as “(treatment − chemical) / treatment = 0.35”, and I guess that was the overall point. I don’t think the case of secretly injecting people with the chemical was ever referred to, nor is it a typical experimental setting.
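To make the denominator difference concrete, here are both ratios computed from one set of invented recovery rates; the two wordings yield different numbers from the same data:

```python
# Invented recovery rates for the three arms of a hypothetical trial.
no_treatment = 0.20   # 20% recover on their own
placebo      = 0.30   # 30% recover on a sugar pill
medicine     = 0.50   # 50% recover on the real drug

# Wording 1: what share of the full treatment effect does the placebo capture?
share_of_effect = (placebo - no_treatment) / (medicine - no_treatment)   # ~0.33

# Wording 2: how much better do you do on placebo than on nothing at all?
better_than_nothing = (placebo - no_treatment) / no_treatment            # ~0.50
```

The two only coincide when the full treatment effect exactly equals the baseline rate (medicine − no_treatment = no_treatment); in general they answer different questions.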
I see the meaning crisis as a function of increased neoteny. Parents provide meaning for children, elders provide meaning for adults. We don’t have village elders and everyone is getting more child-like as they get wealthier.
Huh, I think I agree with lots of components of this, but somehow they’re linked together in a way that seems shaky to me, or like it’s jumping too far too quickly.
Like, yes there’s increased neoteny, and yes there’s increased wealth, but it’s another leap to say that wealth makes people more neotenous. [More likely, from my models, there’s another thing that is causing both, like the increased size and specialization of society. People have to learn longer / have more subservient roles to fit into larger, more complicated organizations, and those larger, more complicated organizations are better at producing goods and services / serving as ‘parents’ for much longer. More peace leads to less trauma which leads to less ‘growing up fast.’]
I appreciated the Foolishness vs Ignorance distinction he drew up in Episode 1.
“Foolishness is lack of wisdom, Ignorance is lack of knowledge” sounds initially trite. But when he drills a little further into it, it became clear that his use of “Foolishness” is trying to gesture at premature pattern-identification and pattern-fixation, with a failure to notice alternative patterns.
“Premature reification” is what I’ve heard Ozzie call something similar, and that’s the handle I most often use for it.
There are probably some types of error that a child wouldn’t make, but an adult would, because adults more readily project one of their pre-existing reifications.
...but also, you need a reification to build things or coordinate. They’re not a thing you want to stop doing, they’re a thing you want to learn to monitor, manage, and question sometimes.
It is good to have some tools to dislodge or rewrite reifications which don’t actually apply. (And this seems to be what he sees as a selling-point of altered states.)
I had no idea that the metaphor “think outside the box” was derived from a math puzzle. That’s pretty cool.
Did you have some conception of where it would have come from and what it was referring to? How can one understand an idiom if one doesn’t understand the constituent parts?
Would you still say it’s worth following along with this series?
Yes; I liked it a lot, and haven’t come across a better introduction to this topic. (You might want to read some of the summaries to figure out if you’ll find the topic interesting.)
I am wondering whether wizards really are shamanistic instances.
In a lot of stories wizards know a lot but are impractical, uninvolved, and overly theoretical. Those aspects don’t really jibe with the high-wisdom aspects.
It is more that wizards are people who have knowledge that other people do not have. And while it might take wisdom to come up with such unusual things, being able to receive or wield them doesn’t have such high requirements. Wizards are associated with spellbooks and such, which is clearly in the domain of sticking with a lot of propositional knowledge.
In particular, what comes to mind is that in Dungeons & Dragons, wizards have intelligence as their spellcasting stat, while clerics and monks have wisdom, and sorcerers and warlocks have charisma. I guess the more general class of “spellcasters” catches more of the aspects, and the wizard is like the “default” spellcaster.
When the shaman is donning the deer mask they are essentially playing singleplayer D&D activating the muscle “roleplay”.
Some of the spellcasters seem to revolve around actions deemed beneficial. The warlock is about relying on an entity outside of yourself, your patron, to get things done, to channel the other. Sorcerers are about embodying the improvement: you don’t use the magic, you are the magic. Clerics tap into devotion, and how intense focus and an elaborate system open new options.
I had previously watched an episode or two of this, and felt pretty meh about it. It felt like he overpromised and underdelivered, and talked a lot without getting to an actual point. I’m trying it again solely on the strength of your recommendation / it seems like you think there’s a solid payoff if you stick with it.
This is good to know; I’ve seen some people recommend it with “if you get through two lectures and you don’t like it, it’s not for you.” So I’m not sure how strongly you should take my recommendation.
In particular, I think one of the things I liked most about it was seeing a thing I’m already deeply familiar with / interested in (rationality / how to orient one’s life) from a new angle. The “history of philosophy as seen by a cognitive scientist” sounds way more interesting to me than “history of philosophy as seen by a philosopher”, or something similar; it might or might not sound interesting to you.
That said, I think there’s a thing going on with ‘underdelivery’, where the lecture is much more “these are the problems meditation is trying to solve, and this is why you might expect meditation to solve them” (with an ecosystem of practices, rather than just meditation), but listening to the lecture doesn’t make you a skilled meditator; you have to actually meditate if you want to solve the problems that meditation solves. [You could imagine a similar lecture on physiology, wherein you end up with a knowledge of the history of movement and exercise and a sense of what you need to do—but also, you won’t actually get fit without moving.]
As well, a lot of his points are something like “here’s a phrase that we’ve trivialized, but which you should take seriously”, but maybe you do take the phrase seriously already, or him pointing at this still leads to you seeing the trivialized thing, since he hasn’t actually helped you realize its meaning.
I’ve just watched two episodes now, and while it’s interesting, it’s also… throwing up a lot of epistemic red flags for me.
He goes off on all these interesting tangents, but it feels more like “just so stories”. Like he can throw all this information at me to get me to nod along and follow where he’s going, without ever actually proving anything, and because there’s all these tangents I feel like he can slip stuff in without me noticing.
I’ve been listening to him for two hours now, and I still don’t quite get what his thesis is, except “There’s a meaning crisis.” I feel like he’s trying to push me towards a solution without being upfront from the beginning about what that solution is… “Traditionalism”, maybe?
Or like maybe he’s saying something simple in a very complex and long-winded way in order to feel deep? But maybe that is the required method of saying it to get it deeper into your brain.
Here’s a single concrete thing he does that drives me nuts. I wonder if it may be a part of what is setting you off, too?
He overuses the term “unifying.” He uses it three times an episode, to mean a different thing than I would usually mean by it. I really wish he’d cut it out.
I usually see “unifying” as signifying that there is an overarching model that takes some of the complexity of several models, and collapses them down together. Something that reduces “special casing.”
He almost never means that. It’s always adding more, or tying together, or connecting bits without simplifying. It comes off to me like a string of broken promises.
In my notes, it means that I produce a ton of pre-emptive “Summary Here Headers” (for theory unifications that seem to never come), that I had to delete in the end. Because usually, there isn’t a deep shared root to summarize. When I come back to fill them in, all I find is a tangential binding that’s thin as a thread. Which is just not enough to cohesively summarize the next 3 things he talked about as if they were a single object.
I think his “big theory” is actually something more like… spoilers… which I wouldn’t have guessed at accurately from the first 2 episodes.
(I can’t get spoilers to work on markdown, ugh. Stop reading if you want to avoid them.)
Maybe “attention as a terrain,” or maybe something about aligning high-abstraction frames with embodied ones? The former feels basic to me at this point, but the latter’s actually a pretty decent line of thought.
I can’t recall any specific examples of him using “Unifying” that way, but what you describe does ring familiar. I think he tends to use verbose language where unnecessary. I’d love to get the Paul-Graham-edited-for-simplicity version of these lectures.
He isn’t offering traditionalism, he recognizes that’s infeasible. He’s looking for something that’s compatible with science and rationality, but also achieves the same thing traditional systems achieved (like creating meaning, purpose, fulfillment, community, etc.) His solution is to create an “ecosystem of practices” (such as meditation, journaling, circling and such) that are practiced communally. Sometimes he also calls it “The religion that isn’t a religion”.
On the one hand, I think there’s still room for him to be clearer about his solution; on the other hand, he’s clear that he’s not actually sure yet what a solution would look like, and the purpose of this series is to define and understand the problem really well, and to cover a bunch of background material that he expects will be relevant for finding a solution.
And yes, I think there’s room for simplifying. If not the thesis, then at least the presentation. He uses very complex vocabulary that I’m not sure is really necessary. To me it feels like it detracts rather than add.
Two episodes / two hours in and he hasn’t mentioned any of this that I recall. I feel like the introductory session should at least vaguely mention where he’s going to be steering BEFORE you’ve invested many hours.
I am pretty sympathetic to his reason for not doing this, which is something like “yes, at the end of the lecture you can say two sentences that feel to you like they capture the spirit. But do those two sentences have the power to transmit the spirit?” I think most summaries (mine included!) are papering over some of the inferential distance.
I do also think he’s much more tentative about proposed solutions than the problem. This isn’t a “I have a great new exercise plan which will solve the obesity crisis”, it’s closer to “we’re in an obesity crisis, this is the history of it and how I think the underlying physiological mechanisms work, and here’s what might be a sketch of a solution.” At which point foregrounding the sketch of the solution seems like it’s putting the emphasis in the wrong place.
Yoav’s reply seems right to me. Also:
Consider doing some epistemic spot checks, where you randomly select some claims and try to figure out if his story checks out. One of the benefits of something like this lecture club is with enough eyes, we can actually get decent coverage on all of the bits of the lecture, and figure out where he’s made mistakes or been misleading or so on, or if the number of mistakes is actually pretty low, end up confident in the remainder.
[I’m doing a more involved version of this that’s going to pay off for some of the later lectures, which is he references a bunch of works by more recent philosophers, and so I’m reading some of those books to try to better situate what he says / see how much his take and my take agree.]
The issue here is that the easy, straightforward facts are all legit to the best of my knowledge (e.g. the basic history of the Bronze Age collapse and such), but the points that his thesis is more strongly built upon are not straightforward fact checks (e.g. “pretending to be a deer helps you hunt deer”, “tribes with shamans outperformed tribes without”, etc.)
It’s like you list a bunch of real facts and real knowledge in order to make your point sound legit, and then put a bunch of wild speculation on top of it. (I’m not saying that’s what he’s doing, but that it’s a really easy thing to do, and really hard to tell apart).
I got a somewhat similar feeling and skipped ahead to an episode title that seemed more interesting. Now, having “spoiled” myself on a couple of things, it is clearer what he is doing with the presentation. He is using sophisticated judgment in choosing a particular path/story, and wants the path to be followable step-by-step by the one walking it.
It is the difference between coming up with a proof and explaining a proof.
By watching in reverse order I can make connections to what the talking points are later connected to. Presented here it is “shamans do wonky stuff and it somehow works”, but with reference to the later material on how the weird stuff might plausibly confer tangible (understandable-by-me-here-now) advantages, it becomes a more dynamic landscape to think in. Part of the point might be that shamans could pick up on the advantages, and thus have a reason to repeat the behaviour/technique, while not having a good gears-level understanding of what it is doing or why it works (or some of them could, but can’t necessarily share the insight with the uninitiated).
His digression about shamans really getting into the mindset of a deer in order to better track them reminds me of a skill “Pretending to Be” that I think is useful for many skills.
Episode 16: Christianity and Agape
A story people sometimes tell is a Garden of Eden sort of story: things were good, then somebody fucked it up, and now things are bad. Who fucked it up and how varies—was it Eve eating the apple, capitalism unleashing human greed, agriculture forcing toil?--but the basic attitude is one of resentment / debt. We have to struggle now to get back to where we ‘should’ be, if that’s even possible.
This has basically not been my sense of the world or of history. To me, it seems much more like “first there was nothing, then there was something and it sucked, and then it sucked a little less, over and over until now.” I am way wealthier than fictional Adam was, and even more so if you consider the actual historical Adam. When it sucked less, it’s normally because of something else fixing it, and giving the fix to you. The basic attitude is something like grateful inheritance.
Like, in a basic physical sense, there was a time before the sun existed, and now it exists, and basically all the material components of my life only exist in the form they do because of stars that existed in the past. In a social sense, I’m living in buildings that I didn’t build, using a language that I didn’t make, using tools that I didn’t invent, under a political system that I didn’t put into place. “Somebody else built that.”
And it’s not just that I found some abandoned ruins, or whatever; the people who built this wealth (often) wanted me to have it. Some of it I’ve exchanged for, but the vast majority of ‘my wealth’ is inherited. If there’s a principle behind this sort of saving up for the future / sweating so that progress happens, it seems like agape, and so the love that I have towards civilization is easy to backpropagate towards its source.
Now, Adam Smith might point out that it’s not the benevolence of the baker that I expect my dinner from, and one of the ways I frame things is capitalism as a way to direct civilization towards generative behavior (by tying it to consumption and status), in a way that leads to more creativity, which makes creative love more common and easier to see.
Anyway, this helped me understand my Christian parents better; I would talk with them sometimes about how I thought a lot about what the world would look like if God were in it, and what the world would look like if God weren’t in it, and how this world looked a lot like the second. They were confused by this, and thought the world looked a lot like the first; but when you think about a world with agape as a powerful force/motivator vs. a world without powerful agape, I think this world looks much more like the first world than the second world. Of course, that doesn’t imply Christianity is true, but makes it clearly part of the ‘intellectual heritage of humankind’, or something; we start off with cyclical religions, then we get religions of change, then we get a religion of progress through generous love.
Tillich (I’m still drawing from The Courage to Be) argues that Stoicism, while viable, is fundamentally unpopular because it picks renunciation instead of salvation.
That is, sure, if you do the right thing when you’re in control, focusing only on the things that you can control is satisfying. But if you do the wrong things when you’re in control, well, what then? It seems unsatisfying to say “well, I can’t control the past” and just forget about it, or to constantly be resetting your identity, or to not have a story of why this happens and how it could be better.
On Less Wrong, we don’t focus too much on sin and guilt, but there’s a ‘clear thinking’ analog when it comes to mistakes / confusions. The sort of ‘courage to think in spite of myth and falsehood’ feels very different from the sort of ‘courage to think in spite of fallibility and uncertainty’; I associate (perhaps unfairly) the first with ‘skeptics’, and the second with a sort of patient focus on smoothing out errors / reflecting on one’s own mistakes / operating on the best available knowledge without being stupefied that feels like the steel rationalist to me.
[There’s an old claim I don’t have a link to, where someone who was into the occult and eventually snapped out of it realized that lots of mystics would describe scientists as “unable to deal with uncertainty”, but this was projection; the scientific virtue was being able to clearly see and sit with your ignorance, whereas this person’s scene couldn’t handle ignorance, and so had to immediately paper over any holes with stories, even if those stories were fake.]
Also note the meta point here; if there’s one “ideal thinking” state, and people start off in lots of randomly different “worse thinking” states, moving towards ideal thinking will be a different direction for different people, and one of the worries about taking one person’s account of their transformation too seriously is typical minding (or, more weirdly, adopting the patterns of their pre-transformation mind so that you can follow the transformations that start from that point!).
Episode 15: Marcus Aurelius and Jesus
So Marcus Aurelius (and the Stoics) get 45 minutes of the lecture, and then Jesus and the short version of agape get the last 15 minutes. But the next lecture is mostly about expanding on those 15 minutes, and so the summary focuses on it. So here’s a brief list of the Stoic things he covers (mostly using quotes or paraphrases):
The Buddha was trying to make you realize how threatened you are, and that you don’t have as much control as you think you do. Epictetus says the core of wisdom is in knowing what’s in your control and what’s not in your control, and in stopping the pretense that things are in your control that aren’t.
Fromm, brought up before as distinguishing the having mode and the being mode, basically got that distinction from the Stoics.
The Stoics shifted focus from products (having mode) to process (being mode), because you have lots of control over the latter but not the former. This involves a lot of practices that are similar to mindfulness / remembering the being mode.
Marcus Aurelius writes a book, which shouldn’t be interpreted in the propositional way; it’s written to himself. It’s spiritual exercises.
Marcus Aurelius has the philosophical problems especially *because* he had power and fame. Unlike the Buddha, he doesn’t try to leave the palace; he doesn’t want to shirk his moral responsibilities (to use his power wisely).
The “view from above” helps you situate things correctly. Looking at situations from above, instead of your perspective, helps you be objective / treat others fairly.
Lots of modern CBT is basically just Stoicism; ‘internalizing Socrates’ is inculcating the sort of mental habits and doubt that dissolve incorrect thinking. “Everything I do is a failure!” “Everything?” asks Socrates.
Episode 34: Sacredness: Horror, Music, and the Symbol
So his core take on sacredness/horror/the numinous, as I understand it, is this:
Humans are limited and finite in a world that is much bigger than they are (both physically and conceptually).
An important part of existing in the world is ‘having a grip on things’; I’m imagining, like, a climber on a cliff or a remora stuck onto a shark; it’s not that you hold the world in your hands (as a thing smaller than you), but that you have control over your position, both resisting unwanted changes and making desired changes (at least in a limited way).
“The numinous” is that which is outside your ability to contain / understand, and is mostly on a dimension of ‘power / glory’ instead of morality. [More ‘Lovecraftian elder gods’ or ‘forces of history that can’t be stopped’ or so on.]
He’s pretty clear on what he means by ‘horror’. It’s “losing your grip on reality”, and so more associated with madness/insanity than fear. [He distinguishes it from ‘horror films’, which he thinks are mostly about the fear of predation, and ‘terror’, which he sees as too linked to ‘terrorism’; I observe that it reminds me a lot of ‘body horror’, which is about losing your grip on yourself, in some ways.]
I don’t quite get what he’s saying about sacredness. I think it’s something like: there’s a way in which the numinous is intrinsically horrifying because it involves something that you can’t really come to grips with (you can’t handle the cliff-as-a-whole even tho you can handle the cliff-as-many-handholds), but whether you experience this as ‘horror’ is mostly about whether or not you’re overwhelmed. In horror, you have the experience of scrambling for purchase and not finding anything and this is overwhelming. Sacredness then is more like the ‘good sort’ of seeing beyond yourself, in a way where you at least have a handle on not having a handle on it (or something?).
Like, I’m thinking of Augustine’s conception of the trinity, where he spent a long time trying to figure out this puzzle and then went “oh, this puzzle is beyond me”, in a way that was… reassuring to him, somehow? “I will sooner draw all the water from the sea and empty it into this hole than you will succeed in penetrating the mystery of the Holy Trinity with your limited understanding.” Vervaeke uses someone else’s phrase, of “homing against horror”, and the way that’s landing here is something like “having accepted that this is too big to contain” instead of “being freaked out about this being too big to contain.”
I think the overall move is something like: even if you grow as much as possible, there will still be things bigger than you; you need some way to handle that. But also there are things that are ‘just within your reach’; the way you grow is by hanging out around them, doing serious play with them, and so on; so you need something that helps you come to grips with your limited size in a way that puts you in positions to grow bigger (instead of giving up on growth as ‘impossible’ because you can never be infinitely large).
Regarding the “puzzle being beyond me”, Vervaeke emphasises that puzzles are not mysteries. I think he is also getting at something extra with the numinous. It is weird to me: understanding Din, Nayru, and Farore as parts of the Triforce, or the feud between Chattur’gha, Ulyaoth, and Xel’lotath, might be interesting, and there might not be a lot of headway to be made, but at face value it doesn’t seem to have the personal bespokeness of the Trinity approach, despite being essentially the same on the theme level, or aspects of the same meme-complex. If I never discover what Twin Peaks was about, sure, it is a mystery, but not a worldview-affecting one. There seems to be some attempt at a distinction which the Trinity has but the Triforce does not, but it doesn’t really materialise at the same level as the other concepts do.
He gave examples of bad and moderate horror things, but a very good and on-point horror memory activated for me.
Initially developed for the Nintendo 64, Eternal Darkness: Sanity’s Requiem was released for the GameCube. Protagonist inherits a mansion, father gets murdered, zombies get slashed, etc. The game had three primary resources: your health, your mana, and your sanity. The lower your sanity got, the more permission the game had to drive you mad. Standard effects would involve things like the sound of people crying, or extra blood on the walls. However, the most extreme of these “insanity effects” would mess with your understanding that you were playing a game.
They are just cool, so I am going to give some examples.
The game would randomly draw what looked like an audio volume control bar and make it go down while silencing all sound output. This was likely to trick you into thinking that your TV’s settings had been altered.
The game would slowly make fuzzy blobs coalesce into insect and spider shapes and then fade them out. The gradual onset would bypass most people’s change detection, so you were not likely to notice them appearing until you moved your eye to that part of the screen for an unrelated reason. This made it likely that you’d think a spider was crawling on top of your TV.
The game would present a technical error screen, a “blue screen”. This was likely to make you think that your GameCube was damaged. (The instruction booklet had calming words for those who went to check whether that was supposed to happen.)
These were the days when you had to manually save your game. The game would boot you to the title screen, and when you went to load your save it would show the memory card as empty.
The game would make the torches flicker with a filter that mimicked a damaged speaker.
Upon exiting a room, it would randomly pull up a “The adventure will continue in the sequel” type of screen (making you think you were being left hanging on a cliffhanger).
In a game scarce on ammo, it would suddenly present you with a room full of ammo just to yank it away (got your hopes up, didn’t I?). This is fourth-wall-leaning because it is more about the player’s awareness of game design than about the world being depicted.
With all the talk about participatory knowing, these are modes of horror uniquely well adapted to an interactive medium.
Episode 28: Convergence to Relevance Realization
As an autistic person, I noticed that at parts he seemed to argue that neurotypicality is at the core of being intelligent.
The social situations he described (like asking for directions to a gas station) he presented as if their failing never happens. However, the ire and tension when it doesn’t go as planned seemed very familiar to me. As the person giving directions, I would probably have leaned more on explicit distance descriptors (such as “250 meters”) rather than “the nearby corner”.
Connecting to how he was previously saying things, he might actually mean that in order to be rational you need to be both intelligent and wise, but he ends up saying that you can’t be intelligent if you are a Spock. So the class of people who are intelligent but unwise, and therefore fail to be rational, would be interesting. But because he uses intelligence in place of rationality here, it reads as emotionally, saliently offensive to me, as though he is saying that autistic people can’t be intelligent. Having significant differences in salience landscape could be an interesting angle from which to look into autism. The previous example, of most people filtering out the feeling of wash labels on their clothes, is a real thing, but for many autistic people it is the case that they do feel, and are bothered by, those tactile sensations.
So it is not a human universal that he is pointing to, but more the neurotypical experience. There might be interesting upsides to weighting global context heavily in one’s operation, but the claim that leaning into local context would be unlivable can easily start to read as xenoneurologically hateful. I don’t think that neurotypicality is at the core of humanity, and to the extent he is saying that an intelligent autistic person is an impossibility, he is just wrong. It might be that he is mixing up what constitutes him with what constitutes people in general. The example of animal communication seems to resonate with neurotypicals being very socially dependent, and the example of octopuses might have an analogue in autistic people, in that communication doesn’t need to be that essential to intelligence. The statement is carried with reservations, but I get a feeling that he is not taking his own reservations as seriously as he would be wise to. Previously he was saying that a good cognitive scientist pays attention to universals; recognising when an assumed universal turns out not to be universal would seem to be part of that.
I guess I can apply the material he is presenting to understand that, because he doesn’t have the participatory knowing required, his framing will be off, and the mistakes are therefore understandable. The analysis is interesting, but with such core tenets being off it will have large fallout.
As a person whose salience tuning might not work intuitively on a neurological level, having an explicit and systematic understanding of how to construct relevance seems like a sensible value proposition.
The issue of considering the right side effects, of course, made me think about EDT vs. CDT vs. FDT, tho here he’s making a simpler and more practical claim.
Episode 27: Problem Formulation
I am starting to read a pattern where, in the theoretical sections, he can refer to concepts as previously known because they were “sprinkled in beforehand”: their earlier appearances are justified, but only weakly, standing out as odd inclusions. I guess it helps with salience, but it also feels like a manipulative technique, as it makes things seem artificially profound; the first instances are pretty trivial, and then they are reused at critical junctions. Like in the movie Inception, it is a revelation planned out by an outside force in order to achieve a goal state of the outside agent.
Just brute-forcing a big search space feels to my brain more like frustration than suicide: more of a null operation than something actively harmful. Sure, it is an unwise move, but part of the threat is that it doesn’t “time out”; it doesn’t announce that it is a bad idea (whereas having something like a sword in your stomach would probably make it pretty salient that this choice might not be the most conducive to biological prosperity).
I don’t know how much it is baked into the idea of heuristics, but if you are stuck using only preselected heuristics, then the lack of flexibility and the blind spots are obvious. What one would ideally want is to come up with heuristics on the fly, and I guess that is part of what relevance realization is going to be about.
Having watched some things out of order, I can see him struggle to keep the narrative in check, with slips where how he is thinking about it conflicts with how the narrative is progressing.
It was weird that, in the part about the covering problem, when he said “people usually frame it as a covering problem”, my brain predicted “it is actually a parity problem”. But that impulse did not make it obvious what the trick was. At the mention of the colors of the removed squares, I predicted the “a domino covers two differently colored squares” property, and how it is helpful and important. I was in this weird state of vaguely having a hint of what the trick was about without it being obvious, not making all the connections.
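For reference, the parity argument for the mutilated chessboard can be checked in a few lines. (A toy sketch; the board setup and function names are mine, not from the lecture.)

```python
# Mutilated chessboard: remove two opposite corners (which share a color)
# and ask whether 31 dominoes can tile the remaining 62 squares.
# Every domino covers one light and one dark square, so a tiling
# requires equal counts of each color.

def color_counts(removed):
    """Return (light, dark) counts of squares left after removing `removed` cells."""
    light = dark = 0
    for r in range(8):
        for c in range(8):
            if (r, c) in removed:
                continue
            if (r + c) % 2 == 0:
                light += 1
            else:
                dark += 1
    return light, dark

# (0, 0) and (7, 7) have the same color, so removing them unbalances the board.
light, dark = color_counts({(0, 0), (7, 7)})
print(light, dark)    # 30 32
print(light == dark)  # False -> no domino tiling can exist
```

The reformulation does all the work: instead of searching over domino placements, a single invariant settles the question.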
I was thinking that part of problem formulation is that, instead of seeing the problem as “ground-level moves”, brute-forcing through the heuristics would be less combinatorially explosive. In this kind of search, “just exhaust all options” would rather quickly be categorised as “doesn’t solve it, at least not fast”. And this process is likely recursive, in that one could come up with strategies for which order to try heuristics in. In the other direction, “searching all the options” is a stepping back from the kind of procedure “the thing I am doing doesn’t work (errors), let’s do something else”. Frustration at this level would mean repeating the action, and the error, verbatim. This seems connected to “madness is doing the same thing and expecting different results” and the skill of saying oops.
A lot of LessWrongian values seem to be referenced, with the feeling of discovering them from a different angle. Here their importance, in how they keep other systems running, is more pronounced. With previous exposure on LessWrong it was more in the flavour of “here is a thing that you can acquire, and it is cool”.
Episode 4: Socrates and the Quest for Wisdom
Continuing on last week’s commentary, Socrates mostly makes sense as part of this move from the continuous cosmos (in which the Gods are physically real and power is what matters) to the two worlds mythology (in which the material world is low and a different world is high).
Like, we begin in a world where Power is Glorious, where Zeus commands respect because he can zap you with lightning bolts. If you read The Iliad, it’s full of people (and gods!) explicitly threatening each other. Aphrodite tells Helen to have sex with Paris, Helen doesn’t want to, and Aphrodite replies with “look, this is me being nice to you, do you want to see me being mean to you?”, and Helen goes through with it. Hera complains about Zeus, her son pleads with her to stop because he doesn’t want to stand idly by and watch Zeus beat her (standing idly by, of course, because Zeus could easily beat him as well). The importance of heroes is determined primarily by where they fall in the power ranking, rather than their moral qualities. The Achilles-Agamemnon conflict is mostly about how respect should be distributed between power and legitimacy. And we somehow end up in a world where Truth is Sacred.
Socrates does something that seems sort of astounding to me, which is conflate goodness and power strongly enough to insist “look, Zeus has to be a moral exemplar, otherwise he wouldn’t be a God.” A related perspective—”if God exists, we need to destroy him / put him on trial for his crimes”—seems pretty common in rationalist fiction, at least. From this perspective, refusing to bow to pressure from the citizens of Athens seems like the obvious move. “Look, either they’re right and I should accept the punishment, or they’re wrong and I’ll be a martyr for the truth, which is better than living without principles.”
There’s a parable that I like, about a monk and a samurai:
Normally I read this with the sense that “yes, you can redefine victory by changing your perspective, but only so far.” The monk can’t physically say “I am whole on the inside,” because he’s dead. But this is what Socrates is doing! He’s taking his ability to reframe ‘winning’ to its logical conclusion.
And, importantly, this is what’s happening in things like Functional Decision Theory, where one is trying to do the thing that leads to the logical you winning, instead of this particular you. You need that to be saved from the desert in Parfit’s Hitchhiker, as well as other problems, in a way that will show up more later.
[Two others come up in this lecture but don’t make it into the summary: Thales, the first philosopher thinking scientifically (by which we mean from a ‘causal systems’ perspective instead of a ‘mythological narrative’ perspective), and the sophists, who study persuasion independently of truthseeking.]
Just wanted to say that even if i don’t find something to say and don’t comment, i still enjoy reading the summary each day and especially your commentary, so thanks!
This seeking of both truth and relevance together feels so important. I wonder where in modern society we see this the most.
I like the concept in this lecture a lot of bullshitting vs. lying to yourself. Even in a lot of the self-help genre, which seems to be going after a similar goal to Socrates of becoming a good person, there is a lot of bullshit in the form of misguided values (fame, fortune, etc). We have few institutions, structures, or communities that enable people to strive over both truth and relevance.
Meta discussion about how to do this:
(This is the sort of place to complain that 5 lectures a week is too many, or to propose that we have a weekly discussion event in the Walled Garden, or so on.)
I’ll be in the Walled Garden to talk about lectures 1-5 from 4pm to 6pm (Pacific time) this Sunday; here’s the invite link.
I just realized that LW lets you embed YouTube videos in comments! I assume this was built in to the editor, rather than a feature we added?
Required some integration from both sides. But yeah, the new editor made it much easier.
So one thing I’m worried about is having a hard time navigating once we’re a few episodes in. Perhaps you could link in the main post to the comment for each episode?
Great idea, will do.
Lesswrong doesn’t have a “group”-like (user subthread) functionality, and I mostly think Lesswrong is currently not an optimal place to do “subscribe to a sequence of posts” content (...ironically?), since it doesn’t seem presently rigged for this.
(I thought they discontinued sequences functionality? They may have actually limited access to it to a karma score or something, and I’m holding this assumption weakly.)
These are counterbalanced for me by the accessibility/reach of LW (for audience and commenters) and the expected quality of comments, though. And it’s always possible to just provide in-text links to tie together a sequence. I think I’ve convinced myself not to push to change it; it’s a fine choice.
I’m… really curious to see how well “discussion-driven something-or-other” goes. I was a little disappointed with how little engagement the “Questions” section sometimes got, and I usually think of “Link-out w/ discussion” as a slightly-similar datatype.
I think if I wanted I could make this a Sequence of posts. I’m also quite curious to see how it goes.
For what it’s worth, I like having all of the discussion on one page (in part because coming back to this page shows you discussion on the other lectures), but maybe it will get unwieldy. [In the Old Days we had to break up the intro threads whenever they hit 500 comments or so, and quite possibly this post will end up with so many comments that we’ll have to break it up also. Probably the team is fine with me signing them up to do surgery on this post if necessary. :P]
BTW, everyone can make a sequence (the button is available on the /library page, deliberately a bit out-of-the-way for new users. Users with 1000 karma should see the menu-item right next to the “new post” button in the User menu)
Note that people can subscribe to posts-of-a-given-tag. (I agree you should also be able to subscribe to a sequence, but, this is a hack for now)
This post seems to be the meta-Lecture Club, not Episode 1, so I’m a tad confused about where to object-comment on Epi 1 (high-level? subthread on Episode 1 summary? Both seem a little suboptimal.)
This probably resolves itself as “just do a highest-level comment” after Epi 1, but I wanted to express the confusion.
Sorry about that; object-level commentary on Episode 1 should happen underneath the Episode 1 comment.
Meta discussion about why to do this:
(This is the sort of place to complain that this is off-topic for LW, or to say that you’re participating, or to talk about why participating makes sense or doesn’t.)
Anna Salamon on Twitter (talking about a different video, by a related person):
I’ve been having this lecture series recommended me by lots of different people, but so far haven’t gotten farther than reading through Valentine’s summaries. Maybe I’ll get around watching some of it now.
I’ve watched some of Vervaeke’s lectures, but they just seem to go on and on without ever reaching whatever his goal is. Likewise Jordan Peterson. Having just read through Valentine’s document (mainly the lecture summaries, rather than the detailed notes), I am still disappointed. Vervaeke just breaks off at the end, just as it seemed it might get interesting. It goes to lecture 26, the last of which suggests there are more to come. I look forward to summaries of them, but more with hope than with expectation.
Yeah, I think you’ll appreciate the summaries we end up with of the second half of the series.
I think this is both fair and unfair, and am trying to figure out how to articulate my sense of it.
I think there’s a way to consider thinking that views it as just being about truth/exactness/etc., and turning everything into propositional knowledge. I think there’s another way to consider thinking that views it as being a delicate balancing act between different layers of knowledge (propositional, procedural, perspectival, and participatory being the four that Vervaeke talks about frequently). I have a suspicion that a lot of his goal is transformative change in the audience, often by something like moving from thinking mostly about propositions to thinking in a balanced way, but from the propositional perspective this will end up seeming empty, or full of lots of things that don’t compile to propositions, or only do so vacuously.
“So what was his point? What does it boil down to?” “Well… boiling it isn’t a good mode of preparation, actually; it kills the nutritional value because it denatures the vitamin C.”
Talk of “his goal” reminds me of a line from SSC’s review of 12 Rules for Life: “But I actually acted as a slightly better person during the week or so I read Jordan Peterson’s book.” [Noting that Vervaeke isn’t trying to be a prophet, or make his own solution; I think he’s trying to do science on wisdom, and help people realize the situation that they / humanity are in.]
But anyway, let’s jump ahead a lot and talk about my main goal (ignoring, for a moment, the many secondary goals).
There’s a thing that LW-style rationalism holds near its core, which is “rationalists should win”. That is, the procedural commitments to rationality are because those commitments pay off (like the point of believing things is that they pay rent in anticipated experiences, etc.). The ‘art of refining human rationality’ is about developing more psychotechnologies that lead to more winning. It feels to me like there’s a big hole in our understanding that’s at least labeled in this series: the problem of ‘relevance realization’.
As an example, there’s a thing that LW-style Bayesianism does, which says “well, induction is solved in principle by Solomonoff Induction, we just need to make an approximator to that.” Vervaeke identifies this as the problem of combinatorial explosion: the underlying task is impossible, and so you need an impossible machine in order to accomplish it. [He doesn’t address SI directly, but if he did, I think he would describe it as “absurd”, meaning detached from reality, which it is!]
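The scale of the combinatorial explosion is easy to make concrete. (A toy sketch; the feature-subset framing and the numbers are mine, not Vervaeke’s or the Solomonoff formalism.)

```python
# If an agent tracks n facts, every subset of facts is a candidate
# "relevant context", giving 2**n candidates. Even at a billion
# checks per second, modest n is already astronomically intractable.

SECONDS_PER_YEAR = 3.15e7

def subsets(n):
    """Number of candidate relevance-subsets over n facts."""
    return 2 ** n

def years_to_enumerate(n, checks_per_second=1e9):
    """Wall-clock years to brute-force all subsets of n facts."""
    return subsets(n) / checks_per_second / SECONDS_PER_YEAR

print(subsets(20))                     # 1048576 -- still tractable
print(years_to_enumerate(100) > 1e12)  # True -- over a trillion years
```

So whatever humans are doing when they zero in on the relevant considerations, it is not exhaustive search over contexts.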
But actual humans somehow have a sense of what considerations are relevant in any particular case, and this has detail and internal structure to it, can be more or less appropriate, and thus should be a branch of psychoengineering. To the extent that AI alignment is about ‘developing machine wisdom’ to best use the machine intelligence, a mechanistic theory of developing human wisdom seems potentially fruitful as an area of study.
This is a cool idea. I watched the lecture series and also thought much of it was highly relevant to LW. Speaking of relevance, I thought his idea of relevance realization was especially relevant, and even thought about/tried to write a post on it. So I’m happy you started this :)
Question: is making top-level comments OK, or do you want to keep it to only the three you made already? If so, maybe make another one for open discussion on the series?
Episode 38: Agape and 4E Cognitive Science
As someone used to ‘4E’ referring to, say, ‘fourth edition’, it’s a bit confusing that the third generation of cogsci is called 4E. But it stands for Embodied, Embedded, Enactive, Extended; that is, human cognition is shaped by human bodies, happens in the physical world, is an action, and is extended through interactions with the world and psychotechnologies. [Consider how having a pencil and paper extends thought.]
The main upshot of all of this (besides being more current science) is that it’s a devastating response to Descartes. Actually the mind and body have a deep continuity between them; actually the mind and world have a deep continuity between them.
I should also note that the word ‘emergent’ shows up a lot, I think in a way that doesn’t fall afoul of The Futility of Emergence; they’re not saying “ok, intelligence is emergent, we’re done here”, they’re saying something more like “ok, intelligence emerges from many smaller-scale interactions”, in a way that clarifies what sort of aggregation is going on (contra Eliezer, I think there are things that aren’t well-described by ‘emergent’, and so it is actually adding some bits).
Episode 30: Relevance Realization Meets Dynamical Systems Theory
The overall picture makes some sense, but I have a lot of trouble with the phrasing of the constraint details.
How do you know beforehand whether an object of study is homogeneous or not? Okay, it seems plausible that “white objects” doesn’t have much. But I think the scientific study of swans should definitely be “in”. Now, does the existence of black swans mean the homogeneity of the object of study is ruined and study should be suspended? Gold might seem homogeneous, but this can be problematized. Somebody might think that “fool’s gold” is a variety of gold. And even if gold is a specific number of protons, there are lots of isotopes covered by it. Gold atoms can be in a variety of electronic excitation states. Their nuclei can be in excited states that relax into gamma-ray emissions. That seems very heterogeneous, and the combining factor can seem a lot like being identical to the selection factor: you can only say that white things are white, but likewise you can only say that gold things are gold (claims about electronic excitations will be crap, claims about isotopes will be crap, etc).
If money is an attributed thing, and attributive properties make for non-scientific use, does that make economics unscientific insofar as it explores money?
The analogy with evolution does clear it up a fair bit. Darwin might have sought out a statement like “to be fit is to be tall”, with the main focus on what property besides tallness might make the statement actually true. But the end result was not a statement with that kind of structure.
I am starting to get the feeling that, just as evolution enables (more) quantified husbandry, a theory of intelligence enables one to build AGI. Previously I understood why AGI would be powerful, but with this line of reasoning the importance of understanding the theory of intelligence seems like the more pertinent point. Even if we don’t have it yet, its place in culture would be similar to that of relativity or evolution.
Episode 13: Buddhism and Parasitic Processing
This was a very short summary, but I think both things it brings up are key:
1. Things like Buddhism were not ‘belief systems’ (which Vervaeke calls a ‘post-Christian’ way of looking at it) and instead were practices. Like, you could imagine people of the future trying to understand football propositionally, and they sort of could, but it’s mostly not about the propositions, for the athletes or the spectators. It’s about enacting the football game. They were transformative practices—you should be able to see the difference between someone before Buddhism and after it (at least if they did it right).
2. The in-depth look at a problem that Buddhism was trying to be a solution to (parasitic processing).
Plato tells us a story about anagoge; Buddhism tells a story about its opposite, and how to avoid that.
Episode 10: Consciousness
What are some of the metrics people use, to judge whether something felt “real?” What are some metrics used to resolve fork-conflicts, between different ways of making sense of the world?
What does it mean, when these are different, and how do you resolve that conflict?
(A few example conflicts: A dream that is obviously not self-consistent, but still makes useful predictions. A vivid memory you have, that none of your friends can recall. A high-confidence intuitive prediction you could make whose certainty colors your perception, but which others insist is based on invalid starting premises.)
A bit of context: I ended up with an odd connection between the way he described a “Realness-gauging heuristic,” and how Blockchain works, that I wanted to share. This eventually led to the question bubbling up.
Vervaeke mentioned that a problem with some Higher State of Consciousness (HSC) experiences is that some people experience an “Axial Revolution in miniature,” and decide that the real world is the dream, and their experience in the altered state was the reality. (Which they usually feel a need to return to, due to what he dubbed a “Platonic meta-drive” towards realness.)
Usually, with altered states (ex: literal dreaming), one ends up treating the altered state as a dream-like subjective experience, and one’s waking life as reality. In these cases, this seems to get flipped.
To paraphrase Vervaeke...
The way I interpret this is that one of the common heuristics to ascertain “realness” is to search for the most extensive, highest-continuity, or most vividly experienced comprehension algorithm that you’ve ever built.
This calls faintly to mind fork-resolution in blockchains.
For the most part, blockchains branch constantly, but by design turn whatever is the longest and most-developed legal branch into the canonical one*. This is not purely continuous, since this is not always the same chain over time; one can overtake another. As long as it’s the longest, it becomes the “valid” one.
While this is one of the simplest fork-resolution metrics to explain, it is not the only one.
Other varieties of forking (ex: a git repo for a software package) may use other canonicity-resolution heuristics. Here’s a very common one: for a lot of projects, the most-built one is called an “Alpha” while the canonical version numbers are reserved for branches deemed debugged or “sufficiently stable.”
(It is also sometimes possible to provide an avenue for re-integrating or otherwise feeding an off-branch to a main one (ex: uncles), but this can get complicated rather quickly.)
* With the notable exception of hard-forks: a rare event, where there is a social move to quash the validity of a chain in which a substantial misuse has occurred. Coming up with similar cases in history or social reality is left as an exercise for the reader.
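The longest-chain heuristic described above can be sketched in a few lines. (A toy illustration of my own, not any real client’s code; real clients weight branches by accumulated proof-of-work and validate much more, and `is_valid` here is just a stand-in for the actual consensus rules.)

```python
# Toy sketch of the longest-valid-chain fork-resolution rule: among legal
# branches, the longest one is treated as canonical, and a longer fork can
# overtake the current canonical chain at any time.

def is_valid(branch):
    # Stand-in validity rule: every block must point at the previous block.
    return all(branch[i + 1]["parent"] == branch[i]["id"]
               for i in range(len(branch) - 1))

def canonical_chain(branches):
    # Among valid branches, the longest wins.
    return max((b for b in branches if is_valid(b)), key=len)

# Two forks sharing a common prefix (blocks 0 and 1):
fork_a = [{"id": 0, "parent": None}, {"id": 1, "parent": 0},
          {"id": 2, "parent": 1}]
fork_b = [{"id": 0, "parent": None}, {"id": 1, "parent": 0},
          {"id": 3, "parent": 1}, {"id": 4, "parent": 3}]

print(canonical_chain([fork_a, fork_b]) is fork_b)  # True: the longer fork wins
```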
One of the things that impressed me a lot about Vervaeke in this episode was naming my crux and meeting it. Like I talk about in Steelmanning Divination, often I’ve written off something for good reasons, and then come across a statement of the thing that says “yes, it runs afoul of X and Y, but even knowing that I think you should look at Z,” and this is a pretty compelling reason to look at Z!
So Vervaeke is familiar with dreams, and expects his audience to be familiar with dreams. Your sense of how much things cohere can be hacked! I realized this as the result of direct experience many years ago, as presumably have most people, and so any claim of states of consciousness that are more in touch with reality than the default state of consciousness, rather than less in touch with it, has a high bar of evidence to clear. The default presumption should be “how are you sure it isn’t just hacking your sense of how much things cohere?”
Vervaeke is also familiar with the unreliability of the propositional knowledge that comes out of these experiences. Some people see God while high, other people see the absence of God while high. Surely this means it’s not a reliable source of knowledge. Contrast to fictional situations; if the DMT entities could in fact factor large numbers, this would be very compelling evidence about them! Or in the world of Control, people in the Astral Realm see a black pyramid, in a way that makes the propositional knowledge gained there reliable.
So Vervaeke’s story is: these mystical experiences are not about propositional knowledge.
This seems pretty promising to me as an account (tho it’s obviously not complete). Dreams might be random soup, but if I realize an error in my thinking because of a dream and that realization persists when I’m sober, and stands up to conversations with friends, then I can be pretty confident that I was in fact making a mistake before and the dream gave me whatever insight I needed to fix it. There might be some very deep mistakes that I’m making, such that I need very vivid dreams to fix them. See Mental Mountains for discussion along these lines.
But this is going a step further than that. Often people who wake up from a dream long to return to the dream once sober. I’m not sure how many would actually prefer a dream world to the real world, but this is a common enough trope that I suspect ‘many’. From Inception:
As well, there’s an old point in AI alignment that, well, things that change your utility function are to be avoided by default. “Significance landscaping” is, essentially, the utility function; if I’m going to change that, I pretty clearly want to not change it randomly. Taking heroin, for example, would change my significance landscaping to make heroin much more significant to me. This seems like a bad move, and so I don’t. So in order to think these mystical experiences are better to have than not have, the connection to wisdom needs to be developed.
[And also the line I’ve been bringing up so far—where if wisdom is choosing the ‘spiritual realm’ over the ‘secular realm’, then that’s actually a mistake if there’s just a secular realm—needs to be addressed. This is the ‘collapse of religion’ in miniature—if we used to use religion to get people to get over their irrationalities with the carrot of heaven, but people have now realized that heaven isn’t real and so the carrot is a trick, well, we still need some way to get people to get over their irrationalities, to the extent that’s a thing that’s good to do!]
Doesn’t detract from your point, but I find it interesting that you interpreted dreams as evidence in this direction rather than the opposite. After all, when we are awake, we know we are awake, and correctly feel that our reality is more coherent and true than dreams are. The opposite isn’t true: if we realize we’re dreaming, we typically also realize that the content isn’t true; we don’t end up thinking that dreams are actually more true that reality is. Rather, finding dreams to be coherent requires us to not realize we’re dreaming.
So feels like someone could just as easily have generalized this into saying “if there’s an alternate state that on an examination feels more true than ordinary wakefulness does, then it’s likely to actually be more true, in the same way as ordinary wakefulness both feels and is more true than dreams are”.
Yes, I also noticed that with Vervaeke. He would often start talking about something that sounds crackpot-ish or like straight-up bullshit, but then immediately mention my objection and go on to talk sense. The last episode had an example of that with “Quantum Change”, which is something I wouldn’t even bother listening to, but he immediately criticized the name and said that the theory is good in spite of it, so I was open to hearing it out.
Towards the end he’s talking about how these transformative experiences people have, these ‘quantum changes’, don’t give people any new knowledge, they give people more WISDOM. But his examples puzzled me.
He says, one person comes out of the transformative experience and says “I knew that God exists”, and then another person comes out and says “I knew that there was no God.”
So my question is, what kind of valid “wisdom” can produce BOTH of those results? Is it just a type of wisdom that transforms the meaning each of these people assigns to the word God?
Around 53–55 minutes of the podcast, if anyone wants to see what I’m referring to.
I’m not quite sure what you mean by “transforms the meaning”; but I agree with at least one version of that.
The way I’d elaborate on it is that “God exists” is more like an internal label for internal experience instead of a shared label for shared experience. Two people talking about ‘the sun’ can be pretty sure they’re talking about the same thing in the outside world; not so for two people talking about God.
And so in a transformative experience, someone might shift their anchor beliefs, and they might not have better labels for those beliefs than “God exists” or “God doesn’t exist”, while those point to different things in more complicated language. (For example, one idea that I might compress into “God exists” is “it is better to face life in an open-hearted and loving way”, and another idea that I might compress down to “God doesn’t exist” is “wishful thinking doesn’t accomplish anything, planning does”. Both of those more complicated beliefs can be simultaneously true!)
love this response. thanks
Episode 6: Aristotle, Kant, and Evolution
I’m commenting before finishing because I wanted this thought out of me:
I’m at the part where Kant is talking about the circular nature of biological feedback systems, and how when he traces out the logic it’s circular and therefore biology is, in some way, unsolvable.
It occurs to me that the feedback cycle of a tree (as the main example given) isn’t CIRCULAR, it’s a SPIRAL. In a circle, you go around and end up where you started. There’s no advancement, no change beyond your position on the circle. But a tree does advance. The roots gather the nutrients to grow leaves. The leaves harness energy to grow deeper roots, make the tree bigger, sprout new branches. The roots are now deeper than when they started, and keep getting deeper still, and the leaves are more plentiful than when they started, and keep getting more plentiful still. There’s a Cycle, sure, but not a Circle; it’s a spiral going ever upward.
And maybe, just maybe, Kang’s bid for the presidency appealing to the idea of ‘moving upward, twirling, twirling’ suddenly makes a lot of sense.
I like and agree with the discussion of cultivating character. The stronger and wiser our character, the more we will act in accordance to what is “right” and the more we can bring about positive experiences for ourselves and others.
But as I see it, the ultimate goal is better experiences for conscious creatures. So I am skeptical of the goal of living up to our potential by striving after what makes us most human. I think such a virtue could only be derived from how it may lead to better lives for living things (which I think it would as Aristotle defines it). But similarly it would not necessarily be good for the world if our first human ancestors 200,000 years ago all lived up to strictly what made them most human. The goal of what makes us uniquely human seems interesting, but beside the point.
I also think that we tend to find cultivating our character meaningful, but there’s no need for this secondary goal in order to decide to cultivate your own character.
Episode 49: Corbin and Jung
The summary at the beginning of the next episode pretty quickly shifts to new material, so here’s the key quote according to me:
Episode 48: Corbin and the Divine Double
Episode 47: Heidegger
Episode 46: Conclusion and the Prophets of the Meaning Crisis
The first 45 lectures have been, to some extent, “how did we get here, and where is here anyway?”, and these remaining five lectures are something like “what do other people think about being here?” This episode mostly touches on Husserl (who doesn’t really make it into the summary at the beginning of the next episode).
Episode 45: The Nature of Wisdom
I think the ‘summary’ portion of the next lecture goes out to about 8 minutes, but I’m cutting it off at about 5, in part because there’s a lot of tying together / elaborating / concluding to it.
Episode 44: Theories of Wisdom
Episode 43: Wisdom and Virtue
Episode 42: Intelligence, Rationality, and Wisdom
The main bit of this episode that stuck with me was the reframing of growth mindset (see SSC’s commentary on it). Roughly, Vervaeke’s story is that the growth mindset studies are impressive (I think he’s a little too credulous but w/e), but also the evidence that intelligence (in the sense of IQ) is fixed is quite strong, and so having growth mindset about it is untenable. [If there’s a way to turn effort into having a higher g, we haven’t found it, despite lots of looking.] But when we split cognition into intelligence and rationality, it seems pretty obvious that it’s possible to turn effort into increased rationality, and growth mindset seems quite appropriate there.
Is this true? Having looked into it, it doesn’t seem super true. Like, my guess is IQ is about as variable as competence measurements of most diverse skills. You can’t easily run any “did this intervention increase IQ?” studies, because IQ-tests are highly game-able, so we don’t actually have any specific studies of real interventions on this topic.
My current guess is that you can totally just increase IQ in a general sense, not many people do it because it requires deliberate practice, and I am kind of frustrated at everyone saying it’s fixed. The retest correlation of IQ is only like 0.8 after 20 years! That’s likely less than your retest correlation for basketball skills, or music instrument playing, or any of the other skills we think of as highly trainable. Of course, it’s less clear how to train IQ since we have less obvious feedback mechanisms, but I just don’t get where this myth of IQ being unchangeable comes from. We’ve even seen massive changes in population-wide IQ studies that correlate heavily with educational interventions in the form of the Flynn effect.
I’m not sure which claim this is, but I think in general the ability to game IQ tests is what they’re trying to test. [Obviously tests that cover more subskills will be more robust than tests that cover fewer subskills, performance on test day can be impacted by various negative factors that some people are more able to avoid than others, etc., but I don’t think this is that relevant for population-level comparisons.]
So, note that there are roughly three stages: childhood, early adulthood, and late adulthood. We know of lots of interventions that increase childhood IQ, and also of the ‘fadeout’ effect that the effect of those interventions are short-lived. I don’t think there are that many that reliably affect adult IQ, and what we’re interested in is the retest correlation of IQ among adults.
In adulthood, things definitely change: generally for the worse. People make a big distinction between ‘fluid intelligence’ and ‘crystallized intelligence’, where fluid intelligence declines with age and crystallized intelligence increases (older people learn more slowly but know more facts and have more skills). What would be interesting (to me, at least) are increases (or slower decreases) on non-age-adjusted IQ scores. Variability on 20-year retest correlation could pretty easily be caused by aging more or less slowly than one’s cohort.
Hard to say, actually; I think the instantaneous retest correlation is higher for IQ tests than it is for basketball skill tests (according to a quick glance at some studies), and I haven’t yet found tests applied before and after an intervention (like a semester on a basketball team or w/e). We could get a better sense of this by looking at Elo scores over time for chessplayers, perhaps? [Chess is widely seen as trainable, and yet also has major ‘inborn’ variation that should show up in the statistics over time.]
Lynn is pretty sure it’s not just education, as children before they enter school show the same sorts of improvements. This could, of course, still have education as an indirect cause, where (previous) education is intervening on the parents, and I personally would be surprised if education had no impact here, but I think it’s probably quite small (on fluid intelligence, at least).
Yep. 0.8 is retest correlation among adults. Also, like, I don’t know of any big studies that tried to increase adult IQ with anything that doesn’t seem like it’s just obviously going to fail. There are lots of “here is a cheap intervention we can run for $50 per participant”, but those obviously don’t work for any task that already has substantial training time invested in it, or covers a large battery of tests.
Yep, definitely not just education. Also lots of other factors.
One of the problems here is that IQ is age-normalized. In absolute terms you are actually almost always seeing very substantial subcomponent drift and change; the way people change just tends to be correlated among different individuals (i.e. people change in similar ways at the same age). This exaggerates any retest correlations compared to a thing like a basketball test, which wouldn’t be age-normalized.
To make my epistemic state here a bit more clear: I do think IQ is clearly less trainable than much narrower skills like “how many numbers can you memorize in a row?”. But I don’t think IQ is less trainable than any other set of complicated skills like “programming skill” or “architecture design” skill.
My current guess is that if you control for people who know how to program and you run a research program with about as much sophistication as current IQ studies on “can we improve people’s programming skills” you would find results that are about as convincing saying “no, you can’t improve people’s programming skill”. But this seems pretty dumb to me. We know of many groups that have substantially outperformed other groups in programming skill, and my inside-view here totally outweighs the relatively weak outside-view from the mediocre studies we are running. I also bet you would find that programming skill is really highly heritable (probably more heritable than IQ), and then people would go around saying that programming skill is genetic and can’t be changed, because everyone keeps confusing heritability with genetics and it’s terrible.
This doesn’t mean increasing programming skill is easy. It actually seems kind of hard, but it also doesn’t seem impossible, and from the perspective of a private individual “getting better at programming” is a totally reasonable thing to do, even if “make a large group of people much better at programming” is a really hard thing to do that I don’t have a ton of traction on. I feel similarly about IQ. “Getting better at whatever IQ tests are measuring” is a pretty reasonable thing to do. “Design a large scale scalable intervention that makes everyone much better” is much harder and I have much less traction on that.
I think laying out your thoughts on this would make a great top-level post. Starting from your comments here and then adding a bit more detail.
Do you happen to remember the source for this? I’m having trouble finding any studies that seem to bear directly on the question.
Episode 41: What is Rationality?
A long summary (as is typical for ‘multi-part’ episodes; this is the second of three episodes on rationality, which is bridging to three episodes on wisdom). I think the rationality debate is mostly… old news, or something? It’s nice to see the ‘purely academic’ version of it, but there aren’t really any surprises, and Vervaeke is coming at it from a view that seems pretty close to “rationalists should win” to me.
Episode 40: Wisdom and Rationality
Episode 39: The Religion of No Religion
So things are starting to come together.
In particular, I think this makes it a bit clearer on what he means by religio if it’s explicitly contrasted with credo; differences in credo are primarily about different propositions that are asserted (i.e. you can tell what religion a person is by how they answer a multiple choice test), and differences in religio are more about different ‘actions that are taken’ in some broader way (i.e. you can tell what religion a person is by how they live their life).
In my understanding of Vervaeke’s view, religions that used to be useful as worldviews and communities of practice for legitimating and encouraging individual growth fell apart (both in the sense that they are no longer seen as legitimate, and also I think because they are no longer doing the best at encouraging growth / anagoge). The first ‘pseudoreligions’ to form were the products of the overall historical trend towards systems and propositions: as Europe dispensed with the religio and kept the credo, we got a version of Christianity that dispensed with prayer and agape and kept around the doctrinal creeds and the crusades, to much suffering and regret.
So the thing that we need to do is restore the parts of religion focused on growth and improvement—not just individually, but also collectively. To the extent there are propositional beliefs, they are about facilitating the anagogic process rather than the ultimate end point.
Of course, a lot of this is how I think about rationality and Less Wrong and the associated community. Just like it might not make much sense to talk about ‘bodybuilding enthusiasts’ who don’t build their bodies, it doesn’t make much sense to talk about ‘aspiring rationalists’ who don’t develop their habit of mind. There’s a surrounding worldview that ascribes special importance to this—it’s not just a hobby, and is much more like a ‘way of life’.
At CFAR workshops, one of the tips that we would often give people at the beginning was “we’re going to teach you techniques, but the workshop isn’t really about these specific skills; it’s about the skill of developing techniques, of which these are examples,” in a way that lines up exactly with Vervaeke’s “meta-psychotechnology for creating the ecology of psychotechnology.”
Episode 37: Reverse Engineering Enlightenment, Part 2
Episode 36: Religio/Perennial Problems/Reverse Engineering Enlightenment
The bit about relevance not being ‘absolute’ or ‘essential’ reminds me of Excluding the Supernatural; for a deity to be ‘actually divine’ instead of just ‘really powerful’ or w/e it needs to be intrinsically relevant. But, interestingly, I don’t think this is a standard that’s possible to hit, basically because of Vervaeke’s critique!
For example, assume I set up hyper-Minecraft, where the villagers are basically emulated humans (and so able to think, do philosophy, etc.), and I sometimes log in and wander around the world, using my admin powers as I see fit. There’s a way in which I am ‘ontologically basic’ from the perspective of those villagers—I’m a mental entity that’s not reducible to within-universe nonmental entities. [And also I’m keyed into the laws of physics in a way that makes me immensely powerful, and so clearly relevant to their materialistic aims!]
But there’s nothing stopping a Diogenes in this world from only asking me to just step out of their sunlight when I offer to grant them any wish. There’s nothing stopping a Socrates from saying “sure, this Vaniver character can reshape the landscape at will, but actually being a god is about morality and truth instead of power.”
Now, maybe it’s a mistake for them to care about morality instead of power; maybe philosophy of this sort is selected against. But on whatever standard philosophy fails on, it can honestly report that it was aiming for a different standard. [Somehow this is reminding me of C.S. Lewis’s claim that the most important sin is pride; basically, in this frame, the ability to choose something other than God’s choice because of centering your standards instead of His standards.]
If you want to run the Minecraft server as a torture simulator, then the philosophers of that world would be correct in identifying the act as evil, rather than giving you licence to be good by fiat of omnipotence.
As authors of books and such, we could make it a utopia for the characters if we wished. Yet we find the world more compelling as a book if it has partial misery in it. And I think this applies even from a within-the-world perspective: activating god mode or easy mode could make existence so structureless that it would be an absurd horror.
Strictly speaking, a Minecraft server’s cosmologist might come up with floating-point rounding as an explanation for the peculiar structure of the Far Lands. The structure of C# or Java could become the subject of their physics, etc. Judging what is “not reducible” for an arbitrarily fine science is hard business. You are ontologically basic only relative to a pretty trivial ontology. And in a very real sense, if their brain runs on silicon and yours runs on carbon, you are on the same level ontologically, i.e. the Minecraft world is a real embedding and detail in the real world.
The property of being “indispensable” smells a lot like a possibility claim. I think the arguments for inexhaustibility and indispensability can be applied backwards to evolution. Evolution never stops; there is no “final evolved form” of an organism. But you can be stuck as a crocodile for millennia. It is not always the case that there is an optimization to be made. For indispensability it means that most of the features of animals are subject to some selection pressure. If you deprive an animal of an important feature, it will be selected against; that is part of the machinery of natural selection. But there are parts that have negligible selection pressure, which can undergo a lot of neutral drift. Just because the animal itself likes some part of its body doesn’t mean it actually is subject to selection pressure (although fetishizing important features can easily turn life-promoting). And if you remove one feature, then the selection pressure on the other parts goes up. If you think a certain religion is “indispensable” for you, and we forcefully take it away from you, you will grasp at whatever remains and will try to generate religio in order to ward off absurdity. And one might succeed in staying sane.
It occurred to me that what is threatening in the meaning crisis is a bit nebulous to me. One understanding is that historical forces have promoted intelligence and have not promoted wisdom to the same degree, making us on balance less rational. So it could also be called the “impending fooldom”. I guess I get that absurdity is not a nice feeling, but when compared to death in evolution it is less clear how important avoiding that fail state is. There is an attempt to link it to being able to pursue goals. If we grow too foolish, then we do not attain our goals and don’t even realise we didn’t attain them, or cease to have goals in the first place.
As a bit of an antonym way of understanding this, I linked in my mind the persistence of the perennial problems and the adjective “eternal” in the Tome of Eternal Darkness (from the GameCube game). It is always going to be there. Relevance realization is to be informed and aware of your structures and meanings in order to consciously direct them. The allure of magic is to be able to benefit from forces one doesn’t understand. So any sufficiently understood method is a technology, and any sufficiently clouded method is magic (possibly with a “k”). Picking up the Tome of Eternal Darkness, using letters one doesn’t know, to spell words one doesn’t know, to effects one can’t imagine, is a dangerous business which is prone to getting you mixed up in matters one doesn’t understand and drifting ever further out of one’s control. I guess I am also reminded of the game Control, which also features themes of dealing with edge phenomena (and it could be argued that it tries to be a 5th-wall-breaking game, in that the relationship between Polaris and Candidate 7 tries to be an enactive analogy aimed at triggering a psychological restructuring of the player’s ego).
The Tome of Eternal Darkness works on powers that lower one’s enlightenment level: exposure to secrets of the world that make sane interactivity with the world hard (i.e. horror and madness). He referred to what he was trying to get at in the history arc, and I got the impression that our efforts are currently making us fools. Another interesting classification could be people that are of high wisdom but low intelligence and therefore fail to be rational (the superstitious?). The “algorithmic thinking” forces were probably a lot more constructive when those people were a bigger portion of the population.
Episode 35: The Symbol, Sacredness, and the Sacred
It seems to me that he might be getting at some different concept with “symbol” than I would get out of it. I kind of get that if you have a religion you might pave out a “learning curve” of insights, and that the same objects would get reused at multiple levels. But what distinguishes a symbol from a non-symbol, and can some symbols be better than others? (I think that “symbolic value” might make sense in the sense of affording participatory transformation, but then symbolic value can vary.)
The claimed inexhaustibility also seems a bit wonky. I get that if somebody reliably gets new insights from something, it might make sense to treat it as ever-producing, but I doubt whether such things truly have this property. Plato is only finitely insightful; it seems plausible that one day when returning to the text it ceases to speak, that there would be diminishing returns, or that the growth one gets from the reading is all the reader’s insight and none of the writer’s. I am thinking of how a river might be inexhaustible: it is hard to drink a river dry, and because of the hydrological cycle one can rely on being able to drink daily. But weather patterns change, and using the river to irrigate too big a field can actually dry up a river. If you have a water bottle with you, then withholding from drinking means you get to drink more later. With a river, spending more time drinking means more water gotten in total.
Having a business that generates a profit can make for exponential growth. But that is a different thing from having infinite money. So in a similar sense being “above the line in transcendence” might be an important thing, but it is not an “I win” button.
Your point about inexhaustibility rings true to me, and reminds me of a broader question about anagoge (for personal development), engineering (for technological development), and science (for understanding the physical world); is it actually an infinite staircase going up (or deeper, in the case of scientific theories), or is there ‘completion’ (in the sense that pretty quickly we’ll be able to make the best possible spaceships, have the best possible wisdom, have the complete theory of everything, etc.)?
It feels really dangerous to have an orientation that presupposes growth, or puts all of the value on growth, in a universe that might actually be finite. But also it feels really dangerous to assume that you’ve grown all you can, and there’s nothing more to do, when in fact you just don’t see the next door!
Vervaeke was losing me in these parts of the series (though I did finish it). His overuse of complex language makes it extremely hard to understand what he’s talking about. And that also makes it hard to evaluate, use, or further explain.
So some quick refreshers on earlier concepts: Vervaeke thinks that humans are evolved, and that means lots and lots of ‘exaptation’, where something originally created for purpose A turns out to also be useful for purpose B, and develops to satisfy both purposes. The tongue is an example: originally useful for moving stuff around in the mouth (lots of animals have tongues that can do that), it then also became useful for speech, rather than a new speech organ being created from scratch (few animals have tongues capable of speech, and this actually seems like the limiting factor in getting dogs to talk with us, for example).
But exaptation doesn’t just happen with body parts, it also happens cognitively. The thing that Vervaeke thinks the symbol is doing is giving us access to the ‘history’ of the thing, in a way that reminds me of UtEB and memory reconsolidation; rather than just going off the ‘current sense of justice’ or w/e the symbol of justice gives you a way to handle the parts of it and its justification all at once, making it easier to reflect on justice and change your mind about it / develop it in contact with more of your experience.
Anagoge is a sort of philosophical self-development / ascent towards the true/good.
Religio, is, uh, the parts of religion that relevance realization is related to? I’ll figure out a better explanation at some point.
Episode 33: The Spirituality of RR: Wonder/Awe/Mystery/Sacredness
Religio is one of those terms that I never quite absorbed the meaning of and so was always a bit confused whenever he used it later on.
As this is a place where he defines it, it would seem that “Operating System” could be a close synonym.
I do think that the connotations of the word are quite far from what it is supposed to “technically mean”.
I am a stickler for possibility claims. His critique of deducing, from the difficulty of getting a perspective on death, that consciousness is immortal, I think is very well grounded, and explains what rubs people the wrong way in traditional religiousness. However, there was a claim that “no matter what one tries, one can’t get a phenomenological perspective on being dead”.
One of the edge cases is that in the live-action show Upload, the protagonist hosts his own funeral. This bypasses some of the things one might worry or talk about. Sure, they are a consciousness that has “hereness and nowness”, salience about who attends, etc. So in that sense it is not a perspective on being dead. Also, being in a mainframe has a good chance of having qualitatively different religio when compared with a neuron brain. But it could bring this sense of “what it feels like to not be me”: when you are a new person, or a different kind of person, you can attain a state that has epistemological value. Even with just biological brains, it leaves the possibility of developing a multiple personality disorder in order to “kill” your starting personality. I guess this is partly why the goodbyes are said to the friend near Mr. Robot’s ending; continuing the story without the architect would make the analogous transfer weaker. There is also the question of the feelings the Doctor in Doctor Who goes through when regeneration is about to onset. Is it proper to be sad? What is being forfeited that makes sadness a proper emotion?
Part of the promise of “psychotechnologies” is to get more “perspectival knowledge” via “serious play”. Does partaking in those three aforementioned series as an audience member impact perspectival knowledge or not, and does it differ in a way that matters from the kinds of perspectival knowledge that are claimed to be impossible?
The idea of protecting against domicide resonates with me, in connection with trying to live in a world that is essentially incompatible with one’s neurotype.
What strikes me as odd is that he is treating the threat as something that will of course need to be defended against. I can understand that not defending against the need for food will lead to starvation and the cessation of biological burning. But so what if we are spiked by anxiety? He seems to treat it like an unviable mode of being. By contrast I feel like, “eh, learn to live with it rather than avoid it from occurring”.
I have watched Dr. K a bit, and when needed he will very quickly introduce the separation that whatever you are perceiving can’t be you. In an attempt to use Lesswrongian lingo, that would be “what is on the map can’t be the map”, which I guess is just another aspect/iteration of “the map is not the territory”.
Episode 32: RR in the Brain, Insight, and Consciousness
All of those arrows. RR is in four places in that connected graph. The main criticism against Nazism was that it was a conspiracy, and I am being very wary of not letting the sensible bits lend undue credence to the other bits.
Saying that there are three kinds of networks is like saying that there are three types of temperatures: cold, hot, and temperate. His being unfair or uncharacteristic in the bits that I know makes it plausible that he gets the other areas he invokes wrong in an impactful way. Sure, one can’t be the master of all the fields needed to get the connections between them, but to me he is walking on way shakier ground than he thinks he is.
Episode 31: Embodied-Embedded RR as Dynamical-Developmental GI
I cannot distinguish this from GPT-3. Is it just me?
This is one of the less-edited transcripts; I often try to change it from one long sentence, which is appropriate for talks, to many smaller sentences and paragraphs, which reads better online; also I try to delete false starts and so on. I’ve been busier and putting less time into the editing, so some of the quality decrease from previous summaries is me.
I’m also becoming less confident that his ‘reminders at the beginning of the next lecture’ are the right summaries to use; they’re much more “ok, here’s where we were, now let’s keep going” instead of “here’s the main change from the last lecture, now let’s look at the next topic in order.”
[There’s also a big inferential distance problem here, where he’s built up some jargon and summarizes his points in that jargon, which (of course) does not make the points any easier to transfer. Like, this really isn’t a substitute for the lectures yet!]
If I edited it, I’m not sure there would be anything left. :)
Yeah, I wanted to comment on that second paragraph being way overly complex, but didn’t have much to say apart from that. Your description seems apt. I hope at least he knows what he’s talking about with all these words. But in terms of communicating these ideas, it does not do the job. (And my memory is that I felt pretty much the same while watching the full lecture, even though I really like his idea of relevance realization.)
The style starts to go from laid out presentation to more thinking aloud.
There are a lot of arrows going on, they have different kinds of meaning and there are two particularly ill-defined and handwavy horizontal lines.
The argument that G.I. and such “prove” that RR is unified doesn’t really go through for me. The g-factor is formulated by picking questions whose correctness correlates across different participants. If intelligence or RR were a scattered thing, the methodology would not be able to show that: the questions that would show capacity diversity would be treated as ill-formed questions and not included in the tests. He is dismissing the problems as fine details, but I think he is relying on them at about that level of accuracy; it is improper to wave them away.
One can talk about whether the g-factor is wide or narrow, but its existence is not interesting. And the existence of different kinds of intelligence points in a scattered direction. As I have understood it, visual-spatial reasoning can be formulated as a different dimension from language proficiency. In a school metaphor, society and students might talk a lot about their “averages”, but on another level they get a separate grade for each subject. That society fusses a lot over averages doesn’t mean that there is a special unified “schoolness” ability that allows one to run fast in physical education and manipulate symbols fast in math. There are other factors besides the g-factor, and the g-factor being supersalient because of its popularity smells like a potential illusion.
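To make the “existence is not interesting” point concrete: g is (roughly) the dominant eigenvalue of the correlation matrix of test scores, and such a dominant component exists for any battery of positively correlated tests, whether or not a unified ability underlies them. A minimal sketch with made-up scores (numpy; all the numbers here are hypothetical, just a shared factor plus per-test noise):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up scores: a shared factor plus test-specific noise, so every
# pair of "subject tests" is positively correlated by construction.
n_students = 500
shared = rng.normal(size=n_students)
scores = np.column_stack([
    shared + rng.normal(scale=1.0, size=n_students)  # one hypothetical test
    for _ in range(5)
])

corr = np.corrcoef(scores, rowvar=False)        # 5x5 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)         # eigenvalues in ascending order

# The largest eigenvalue's share of total variance plays the role of "g".
# Its existence alone doesn't settle unified-vs-scattered ability.
g_share = eigvals[-1] / eigvals.sum()
print(round(float(g_share), 2))
```

By construction the shared factor dominates here; the point is only that a large first component falls out of the math whenever the correlations are positive.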
The style is a bit wavy: at some points we make very fine distinctions, and at other points we are being very handwavy. In the parts where he refers more to work done by others, it seems more like misapplication. A lot of it seems like it could be interesting, but it also constantly feels like details are getting trampled over left and right. It might be because I am watching partly out of order, but he has a theme where he is annoyed by important concepts becoming trivialized. But I realised that such trivializations are a product of relevance realization: in order to use an English idiom, you primarily need its current symbolic meaning, and you do next to nothing with the etymology unless you are doing a special thing like searching for connections between concepts. According to RR, this “cut to be fast” is the way to be efficient and to do the important thing with the toys available (here, the idiom). When there are lots of clever turns of phrase that reveal the subparts, the revelations seem plausible and valuable; but when he decries why everybody doesn’t see the world like he sees it, that is like demanding or expecting that everybody be a philosopher or a linguist. So I am feeling that there should be more “this is what is cool about being me” and “going along this path gets you these kinds of things”, and less of “you should be more like me” and “I know who you should be”.
Episode 29: Getting to the Depths of Relevance Realization
The part about FINST and demonstrative reference made me think about localizing in sign language. You can make the sign for an entity and point to a place in the ‘sign space’ in front of you, so that later you can refer back to the entity by referring to (pointing to, making signs at) that place. You could set up multiple entities in the space, and later discard them again and place new ones.
My understanding of (Dutch) sign language is only rudimentary so this should be taken with a grain of salt, but it’s an interesting connection nonetheless.
Episode 26: Cognitive Science
I’m pretty sure that’s the entire summary at the start of the next lecture? So I suppose I’ll try to summarize some bits of it:
“Cognitive science is born out of a particular way in which the scientific study of mind has unfolded. … Mind refers to different levels of the reality of mind, with different disciplines that use different vocabularies, different theoretical styles of argumentation, different means of measuring phenomena, different ways of gathering evidence.”
Neuroscience talks about the brain, via patterns of neural activity measured with fMRI etc.
AGI / machine learning talks about the information processing of the brain; algorithms, heuristics, etc.
Psychologists talk about behavior, working memory, problem solving, decision-making.
There are lots of bridges between things; psycholinguistics bridges between linguistics and psychology to try to figure out how the physical brain communicates, which involves touching both the study of the physical brain and the study of communication.
Equivocation (question substitution / using a concept from one level to do work at another level) is the main thing to watch out for here, as it leads to bullshitting yourself. “No integration through equivocation!” Philosophy is useful mostly because it’s about conceptual crispness / noticing and counteracting this equivocation while bridging between levels.
His main sketch of a way out of the meaning crisis is that we need to develop the cognitive science of meaning cultivation. [My commentary is that this seems like the right sort of meta-process; like, rather than having ‘a philosopher’ you have ‘philosophy’ or something, and so error-correction seems much easier. You lose out on the unifying vision of one creative genius, but that’s the price you pay for avoiding blind spots.]
Episode 25: The Clash
(The last third of this episode is warming up the transition from history to science, and so I extended a bit from the recap to the introduction of episode 26, in a way that mirrors episode 25’s structure.)
This is the end of the history analysis arc, and I feel like I don’t get a lot out of it; I have only a hazy idea of why its inclusion at this length was proper.
He has a huge boner for axial age discoveries and then details how the findings get muddled or distorted later. I guess he has a program where he wants to salvage and focus on a couple of key nuggets from the axial age and leave the rest of the gravel behind, but I would be more interested in the selection criteria for why to keep those bits, or what kind of interesting soup one can make with the ingredients, rather than a list of reviews of why previous soups tasted bland.
Yeah, I noticed being confused by this also the second time around. I’ve got a few guesses for what’s going on.
John is a guy with a theory (about relevance realization), the theory explains some stuff, but the way to sell it is to tie it to something bigger. [“All of history is culminating in this moment!”]
John is a guy who constantly comes across lots of objections, and the general answer to those objections is a detailed dive through all of history. [“Eliezer, did you really have to write so many words about how to think in order to talk about AI alignment?” “Yes.”]
John is trying to convince people who are coming at this from the history side to take the science side seriously, and giving his spin on how all of the relevant history comes to bear is table stakes. [This is like the previous one, but who asked for the focus on that is flipped.]
Actually the series is mostly about “where we are, and how we got here,” and so it’s more like the history is the content and the cognitive science is the secondary content. So it’s not “why is half of this history?” and more “why did he tack on another 25 lectures afterwards?”
But I am noticing that quite probably I should just recommend the latter bits to people interested in relevance realization and not the history?
This feels to me a bit like the normal style of philosophy (or history of science or so on); you maybe talk a little about what it is that you’re hoping for with a theory of astronomy or theories in general, but you spend most of your time talking about “ok, these are the observations that theory A got wrong, and this is how theory B accounted for them”, and if you’re a working astronomer today, you spend most of your time thinking about “ok, what is up with these observations that my theory doesn’t account for?”
I do think this comes up sometimes; like when he talks about homuncular explanations and why those are unsatisfying, that feels to me like it’s transferring the general technique that helps people do good cognitive science instead of just being a poor review of a single soup.
Howdy. I think his concern with the history is that he wants to reduce equivocation in debate surrounding consciousness (he is clear about this in his ‘Untangling the Worldknot of Consciousness’ miniseries with Gregg Henriques, though he does point to this in early AftMC episodes) by showing that so much of what we take to be natural to our cognition is largely the result of invented psychotechnology and (at least seemingly) insightful changes to our cultural cognitive grammar. It is incredibly standard for us to immediately obviate solved problems, and when something is obvious to us, we often have incredible difficulty seeing how it could have ever been otherwise.
I agree. I also think that part is the better part of the series, and I can see myself recommending to people to watch just the first part, but not just the second. Though the second part explores some important concepts (like relevance realization) I think there’s a lot of room for improvement on the delivery, where I think the first part is quite well done.
I think the two things that most bothered me in the second part were his overuse of complicated language, and his overuse of caveats (I get why he makes them, but it breaks the flow and makes it so much harder to follow, especially together with all the complicated language)
Episode 24: Hegel
So I feel like there’s a Schopenhauer-like response here, which is something like… “development is the joke that the civilization plays on the individual”? That is, you might go about your life thinking there’s some deeper purpose to your life or some great spiritual growth on offer, but actually what really matters is a hundred thousand people all being gears in a giant machine to make slightly better semiconductors, which then serves as gears in another giant machine, and the whole thing is aware of this process of using material progress to advance material progress. One can view the scientific / capitalistic revolution eating the world as the narrowly propositional / materialistic forces competing against the balanced / spiritualistic forces and just actually delivering the goods in a much more obvious way.
Like, it’s a coincidence that this paulfchristiano post came out yesterday, but it somehow feels very relevant for thinking about material dialecticism.
My inner Vervaeke responds with “but you pay a terrible price for that!”, and he’s right; if you give up on individual development / experience, then the bottom falls out and you end up with Bostrom’s Disneyland with no children.
Episode 23: Romanticism
Another short summary, so let me summarize.
Kant’s trying to unify the two halves separated by Descartes; he proposes a shift where the mathematical, rather than being ‘out there’, is a lens that you apply to reality. “Math isn’t discovering reality, math is ultimately about how the mind imposes a structure on reality so it can reason about it.” Vervaeke comments: “that’s a really big price you pay for getting the two sides of Descartes back together!”
A quick description of predictive processing, how actually there does seem to be a filtering thing going on. Bottom-up and top-down processing are “completely interpenetrating in a completely self-organizing manner outside of your cognitive awareness.”
The Romantic reaction to this Kantian model is to notice that the closer you are to the mind / the more rational you are, the more you’re in your abstract frame and out of touch. So in order to get closer to reality, you have to move further from the mind / from rationality / from math.
Jung is basically Kantian epistemology plus gnostic mythology.
Vervaeke’s very ambivalent about the Romantics because they’re after contact with reality and they’re trying to recapture the lost perspectival / participatory knowledge. But because they’re in a Kantian framework, they think they get that by going into the depths of the irrational aspects of the mind.
The Romantics become anti-empiricists; the empiricists view the mind as a blank slate that’s impressed on by experience, while the Romantics view the world as an empty canvas on which imagination expresses itself. (Vervaeke thinks both are wrong; I think they remind me a lot of no-self and self, which I view as interrelated like the taijitu.)
“[Romanticism] is a pseudoreligious ideology so it sweeps the continent but it’s like spiritual junk food. It’s tasty, but it’s not nutritious, and so what happens to it? Well it quickly gets translated into nastier forms, not without first of all setting the world on fire. Romanticism plays a big role in the rise of the French Revolution and the Napoleonic Wars.”
Romanticism fails; it wants to be the replacement for Christianity and doesn’t, but it also doesn’t go away.
Schopenhauer takes the Romantic notion of outward motion (from mind to world) as ‘imagination’ and replaces it with ‘will’. Previous depictions put the ‘head’ above the ‘stomach’, visualizing humans as using reason to overcome their passions; Schopenhauer inverts this order, where the stomach (will) is the driver, and reason is its servant.
Schopenhauer is pretty down on the ‘will to live’ (“sex is a cruel joke played on the individual by the species” → your life is shaped by striving for something that doesn’t really benefit you and isn’t worth it)
Nietzsche takes will to live and replaces it with the will to power. Nietzsche sees the Lutheran version of Christianity as about suppressing the capacity for self-transcendence, and the will to power as recapturing it. (Vervaeke thinks the core problem with Nietzsche is that you have self-transcendence without the machinery for dealing with self-deception, which is what rationality ultimately is.)
Also, I’ll pull out this quote where he tries to summarize the overall project:
Episode 22: Descartes vs. Hobbes
So I gave a talk on the Meaning Crisis on Sunday in the Walled Garden, which mostly was about the agent-arena relationship and some other stuff, and among other things I pointed out that part of the crisis here is a growing sophistication of concepts that breaks down ‘useful bucket errors’ at earlier stages. “It’s fine for Plato to say that truth is goodness and goodness is truth, but we have clearer concepts now and have counterexamples of truth that’s not good and goodness that’s not true.” Zvi pushed back; ‘well, how sure are we about those counterexamples?’
After sleeping on it, I think “actually, they’re more like type definitions than they are like counterexamples.” If one thing is about a correspondence between descriptions of worlds and our particular world, and the other thing is about a correspondence between descriptions of worlds and real numbers that indicate how much one ought to prefer those worlds, then for them to be exactly equal you need a very strange utility function. And it’s much, much harder to make them line up if you have a different version of ‘good’ than ‘consequentialist utility theory’, as that gives you different types.
Continuing on the type distinction, Vervaeke talks a lot about these four varieties of knowledge: propositional, procedural, perspectival, and participatory. But the Cartesian view is really only comfortable with the propositional knowing. [Actually, isn’t it also about the participatory knowing of being your mind touching itself? But I suppose that’s only a very narrow subset of participatory knowledge.]
One of the things that came up in the conversation was the way in which ‘everything’ can be compiled to propositional knowledge. My favorite example of this is Solomonoff Induction; it’s a formal method for updating on observations to determine what the underlying program for a computable world is. First, you run all possible programs to get their output streams, you compare those output streams against the actual observations you get, and you rule out all programs who disagree with actual observations, and then you have a distribution over the remaining programs to predict what future observations from the world will be. This ‘works’ if by ‘works’ you mean “couldn’t possibly be implemented.” So, good enough for the mathematicians. ;)
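The filtering step described above can be sketched as a toy, with a finite, hypothetical set of programs standing in for all computable ones, and a uniform prior over survivors instead of Solomonoff’s length-weighted prior (both enormous simplifications, which is exactly why the real thing “couldn’t possibly be implemented”):

```python
from fractions import Fraction

# Hypothetical "programs": each maps a time step to an output bit.
programs = {
    "all_zeros": lambda t: 0,
    "all_ones": lambda t: 1,
    "alternating": lambda t: t % 2,
}

def posterior(observations):
    """Keep only programs whose outputs match every observation so far,
    then spread probability uniformly over the survivors (a crude stand-in
    for Solomonoff's 2^-length prior)."""
    survivors = [
        name for name, prog in programs.items()
        if all(prog(t) == obs for t, obs in enumerate(observations))
    ]
    return {name: Fraction(1, len(survivors)) for name in survivors}

print(posterior([0, 1, 0]))  # only "alternating" survives
```

Predicting the next observation then just means asking the surviving programs what they output next, weighted by the posterior.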
But armed with the same style of argument as Solomonoff Induction, you could make the case that really all other things are propositional (in an important way). My participatory knowing—what it’s like to be me participating in an experience—cashes out in terms of physical facts about my brain, and a complicated tower of inferences that recognizes those physical facts as being an instance of participatory knowing. That complicated tower of inferences is a program that could be implemented (and thus is present in) Solomonoff Induction. There might be more things in heaven and earth than are dreamt of in Horatio’s philosophy, but not Solomonoff’s. [Well, except incomputable things, but who cares about those anyway.]
I notice that I’m finding myself more and more dissatisfied with this sort of ‘emulation’ argument. That is, consider the Church-Turing argument that if you have the ability to do general computation, you can implement any other method of doing general computation, and so differences between programming languages / computing substrate / whatever are philosophically irrelevant. But if you’re an engineer instead of a philosopher, this sort of emulation can actually be fiendishly difficult, and require horrifying slowdowns. In reality, thinking of things in the way they’re actually implemented helps you carve reality at the joints / think better thoughts more quickly.
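The slowdown point can be shown in miniature with a toy stack-machine interpreter (an invented example, not any real ISA): computing 2 + 3 natively is one operation, but the emulated version pays per-instruction dispatch overhead for the same result.

```python
def run_stack_machine(program):
    """Interpret a tiny stack language; returns (result, interpreted steps)."""
    stack, steps = [], 0
    for op, *args in program:
        steps += 1  # every instruction pays dispatch overhead
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1], steps

# Emulated 2 + 3: three interpreted instructions (plus dispatch cost each)
# for what is a single native operation.
result, steps = run_stack_machine([("push", 2), ("push", 3), ("add",)])
print(result, steps)  # 5 3
```

Stack more emulation layers (a VM inside a VM) and the overheads multiply, which is the engineer’s version of the point: Church-Turing equivalence says nothing about cost.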
I’m not yet sure how to wrap this up nicely. I think there’s a pitfall where these sorts of emulators / compilers / etc. are used, not necessarily as curiosity-stoppers, but as finesse-stoppers? Like, you could learn skills for dealing with this sort of thing, but because it’s philosophically solved, you don’t have the sort of drive to grow.
But I don’t have the positive version of this crystallized yet. I do think it looks something like balance, like trying to be strong in lots of different ways, instead of pretending that a particular way is all-encompassing.
Episode 21: Martin Luther and Descartes
Typo: “if matter is real the we can build a material computer” should be then
Vervaeke’s split between Luther and Descartes reminded me of SSC’s On First Looking into Chapman’s “Pop Bayesianism”, but the camps are importantly different. There, Aristotelianism is the camp of certainty, and Anton-Wilsonism the camp of anti-certainty. Here, both Luther and Descartes are after certainty; Luther thinks you get it by a sort of ‘pick it and stick with it’ faith (which is importantly detached from action, but not necessarily from evidence!), whereas Descartes thinks you get it from careful deductive reasoning.
Luther found the practice of “buying your sins away” that the Catholic church was engaged in highly bad. In that light, “no, you can’t wealth your way out of the moral dimension” makes more sense. The Catholic church was turning into an organization that exercised political and societal control rather than providing spiritual service.
In science, if you make multiple replications, you would expect them to have similar outcomes if the phenomenon in question is in fact real. If there is a central authority that has decided beforehand what the result should be, then that is not true measurement. Thus freeing all the different experiment runners to individually verify the result, rather than leaning on an opinion leader, makes for a more reality-sensitive process. Luther thinking that persons should read the book for themselves, rather than have it read for them, doesn’t seem to so obviously lead to fracturing.
IMO it does, because 1) people’s innate judgment / different life experiences / contrarianism / etc. can lead them to disagree on interpretation. [Relevant xkcd] If your centralized authority is a person who can respond to events and questions, it’s obvious what the Pope says you should do about X, whereas if it’s a book that needs to be interpreted, people can more easily disagree about what the Bible says you should do about X.
Note also that a centralized authority both discourages rather than encourages that sort of disagreement and directs status-seeking to climbing the hierarchy instead of finding a thing to disagree on.
Episode 20: Death of the Universe
Episode 19: Augustine and Aquinas
Incidentally, this is one of the reasons why I (as someone who deeply prefers text-based communication media to audio or visual ones) nevertheless encourage people to actually watch the videos.
It also points to one of the big meta-issues; part of what’s happening is modularization and specialization. Reading used to be a big package deal that got you lots of things, and now it’s a narrow focused tool that does what it does very well, but doesn’t give you the other parts of the package deal. As far as I can tell, we’re better off with rapid silent consumptive reading than just having access to Lectio Divina. But there’s a big price for this in coherence, as all of the various components of your life become necessarily detached from each other so that they can be interchangeable.
Episode 18: Plotinus and Neoplatonism
Again, Vervaeke on returning to something that ‘worked’ in the past:
Episode 17: Gnosis and Existential Inertia
In looking for another post, I found SSC’s Against Anton-Wilsonism, which I think makes the same point as Vervaeke repeatedly makes, of being against a sort of pick-and-choose autodidactic approach to mysticism, as opposed to taking a package deal from a sapiential and supportive community, and actually putting in the calories.
Wait, isn’t that what Vervaeke himself does? Or does he do it himself because he thinks he’s proficient enough and is putting enough effort into it, and is willing to risk failure for the chance of finding new ground, but thinks in general people should pick up a ready-made bundle and roll with it? Perhaps the ecosystem of practices he’s trying to develop?
I don’t think he’s doing the autodidactic thing. Like, he studies wisdom as a scientist, but I think personally he practices tai chi and meditation in part because they’re tried-and-true with the sort of supportive community that he talks up in many places. Much of this lecture series is, I think, not his material, and is instead other people’s work and other people’s analysis, passed through his filters. [He doesn’t mention this until later, but he’s not trying to be a prophet / start a religion / etc.]
From that post:
Sounds like a good article; only learning about rationality, and not actually learning rationality, is definitely a core failure mode in learning rationality. Has Scott or anyone written about it?
The example of thinking about whether one should become a vampire was very resonant with me because I had watched "LA by Night". Whether you should make your children your childer, whether you should Embrace your lover: the basis of the decision can indeed be obscured, and there are information asymmetries. The characters' angst comes from agonizing over harshnesses they themselves never had the choice to opt into. Is that what happens when the transformation is opted (or forced) into and it turns out to be a bad choice?
Bleed is not necessarily always a sought-after phenomenon. Being able to distinguish the player from the character is very often desirable. Bleed can make you explore things you didn't want explored, although I guess it does let you explore things where you couldn't know in advance whether you wanted them explored.
I don't have a full grasp on the lingo of "agent and arena", but for the setting of the vampire role-playing game, having a name for the arena ("World of Darkness") makes it click for me more. Having all the different kinds of agents have their own lingo and understanding of the world also makes it clearer. One can talk about the differences between humans and vampires, or the difference between kine and kindred. In a way it is about the same objects, but the choice of terminology is more natural to one kind of agent than to the other.
Episode 14: Epicureans, Cynics, and Stoics
At about 20 minutes in, he says that as a cognitive scientist, the evidence that your mind and your consciousness are completely dependent on and emergent from your brain is overwhelming. Now, I agree with this, and I can think of various examples that lead me to believe that that's the Occam's razor position, but I'm curious if anybody can point me to a central collection of resources making this case. My basis for thinking this, as a layman, isn't as rigorous or complete as I would like.
There are two main alternative hypotheses you might want to contrast that with: dualism and “body-mind”.
For dualism, the theory is that the mind is happening somewhere else (a mental plane) and “pushing into” the body. Think, like, a video game being played by a person; the character isn’t doing the generating of the mind or consciousness, that’s all happening on the other side of the screen. IMO the most compelling external evidence against this comes from brain damage cases, of which the most famous and one of the earliest was Phineas Gage, and the most compelling internal evidence comes from brain-affecting chemicals. (You still need some external evidence to show that the chemicals are affecting the brain / nervous system specifically). If the brain were just an antenna receiving input from the mental realm, instead of the place where the action is happening, it would be weird to have functional errors connected so tightly to physical errors. (I think there are maybe people who still hold this position? Or believe in dualism for weirder reasons.)
For “body-mind”, the theory is that the mind isn’t just happening inside the skull; it’s happening throughout the whole body, or in connection with other parts of the environment, and so on. I think in response people mostly go “ok by ‘brain’ I meant ‘nervous system’, which is mostly your brain”, but again we look at the cases where people have lost parts of their body that aren’t their brain and see how much effect that has on consciousness, and the result is mostly quite small. (Looking at amputees, one gets the sense that not much of the mind is happening in arms and legs, whereas looking at patients who have lost bits of their brain, one gets the sense that lots of the mind is happening there.) People whose habits and cognition have become dependent on some external features—like looking things up in their phone, or conferring with colleagues, or so on—do often have their behavior and performance interrupted by losing those things, but it seems harder to argue their consciousness is affected.
These sorts of things are definitely along the lines of the examples I had in mind as well. Thanks for the reply.
From The Courage to Be, by Paul Tillich, discussing the Stoics:
Episode 12: Higher States of Consciousness, Part 2
So this completes Vervaeke’s account (for now) of what’s going on with mystical experiences. They don’t do the thing we might want them to (give us access to propositional knowledge), but they give us some sort of non-propositional guidance through a way to vary our internals in a way that lets us experiment with things / untrap our priors.
This also explains some about why it would be ineffable: consider the difference between describing an idealized algorithm and describing a pernicious bug you found in your code. The first is simple and formal, with many of the details abstracted; the second is almost entirely about the details. Most of these experiences are more like exposing psychological bugs so that they can be reimplemented, in a way that’s not going to generalize between people (as everyone’s implementation of that bit of their psychology will likely be different).
But… I’m not convinced it’s an asymmetric weapon, yet. The thing where you randomly increase variation sometimes breaks you out of bad spots, but it sometimes puts you into bad spots. I think Vervaeke would respond: that’s what the whole practice and community built around it is for! Someone who goes on a trip supported by other people, who know how to cultivate wisdom and challenge foolishness, is much better off than an autodidact who tries it on their own, maybe missing a core preparatory step or foolishness-challenging skill. Also, maybe this is just ‘for extreme cases’; for example, you might want to give psychedelics to almost everyone with PTSD, but almost no one without PTSD.
[I’m not sure why I’m focusing on psychedelics here, since part of the point of meditation is to get to these states, and he seems pretty bullish on everyone doing meditation. I think it’s that the risk for psychedelics seems much higher, and so the story has to be more convincing?]
Episode 11: Higher States of Consciousness, Part 1
I also found hints of your steelmanning divination argument in here:
He was making the case for a random walk through the space of things we’re not changing in order to help us find what we might be doing wrong.
There’s two or three terms he brings up again in this episode that he hasn’t used for a while, and I find the terms very ungooglable, and I can’t remember how he defined them earlier in the conversation—“exacted”, “exceptation” and maybe “exactation”.
Can anybody help me out here?
Is he saying "exapting" instead of "exacting"? And by "exapting", does he mean something like "the repurposing of existing tools for new purposes"?
Episode 9: Insight
Incidentally, when he first introduces ‘quantum change’ he says “This is known as quantum change. Bad name, bad name. Good theory.”
Episode 8: The Buddha and “Mindfulness”
Episode 5: Plato and the Cave
As an aside—Vervaeke says,
My hot take before this series was that Socrates probably had it coming, tho I think the previous episode gave me a much more positive impression of Socrates. [There’s a thing Vervaeke will do a lot in this series, where he tries to distance “talking about X the actual historical figure” (about which there might be a lot of controversy) and “talking about X as understood by the intellectual history” (about which there might be much less controversy). You might not think you have good enough records of Jesus’s existence to be confident about what actually happened with Jesus or whether he even existed, but you should think you have good enough records of Christian theology’s thoughts about Jesus to be confident about the history of ideas. Here I’m both noting that “I wasn’t there”, hence the ‘probably’, but also am floating a hypothesis that hinges on the facts on the ground.]
A few years ago, it seemed to me like one of the big problems with LessWrong was that the generativity and selectivity were unbalanced. There wasn’t much new material posted on LW, and various commenters said “well, the thing we should do is be even harsher to authors, so that they produce better stuff!”, and when I went around asking the authors what it would take for them to write more on LW, they said “well, putting up with harsh comments is a huge drawback to posting on LW, so I don’t.”
Now, it would have been one thing if it were the top writers criticizing things—if, say, Eliezer or Scott or whoever had said “actually, I don’t really want my posts to be seen next to low-quality posts by <authors>” or had been skewering the flaws in those posts/comments. [Indeed, many great Sequences posts begin by quoting a reaction in the comments to a previous post and then dissecting why the reaction is wrong.] But instead, the commenter most frequently complained about by the former authors was a person who did not themselves write posts.
Now, the specific person I had been thinking of had been around for a long time. In fact, when they first started posting, their comments reminded me of my comments from a few years earlier, and so I marked them as someone to watch. But whereas I acculturated to LW (and I remember uprooting a few deep habits to do so!), I didn’t see it happen with them, and then realized that when I had been around, there had been lots of old LWers to acculturate to, whereas now the ‘typical comment’ was this sort of criticism, instead of the old LW spirit.
“Oh,” I said in a flash of insight. “This is why they executed Socrates.”
That is, imagine you’re responsible for some Athenian institution; you have the strong belief that the survival of your society depends on how much people buy into the Athenian institutions, and how much they buy into a status-allocation structure whereby the people who sweat and build receive lots of credit, and the people who idly criticize receive little credit. From this viewpoint, Socrates looks like someone who has found a clever exploit; a tactic wherein one can win any fight by only attacking and never defending. One can place immense burdens on any positive action (“Oh, so this is annoying? How would you define annoying?”) while not accepting any burdens of their own (“I’m just asking questions.”). One of Socrates’s innovations is a sort of shamelessness—if someone responds with “only a fool doesn’t understand what ‘annoying’ means!”, Socrates is happy to respond with “indeed, I am a fool, so can you explain it to me?”, whereas someone more cooperative would bow to what Everybody Knows.
Now, one Socrates is good to have around, in the same way that one court jester is good to have around. Dilbert did a lot to improve the corporate culture of the US, as people started asking themselves “oh wait, am I the pointy-haired boss in this scenario?”, or employees found it easier to coordinate around mocking particular sorts of bad behavior. This commenter, in a healthy community of lots of authors and lots of rewarding discussion, likely would have been a good addition to the soup. But “corrupting the youth” is serious—existentially serious! If it looks like a whole generation wants to grow up to be jesters instead of kings and knights and counsellors, then the very polis itself is at risk. [And, like, risk of murder and enslavement; existential questions are big deals!] If it looked like the next generation of LWers were going to acculturate to being this sort of nonconstructive critic, well, that seemed like reason to ban that commenter, or shut down LW to make it a monument instead of a ghost town.
Note that this doesn’t even require that Socrates himself is ‘doing it wrong’ or is unbalanced or so on—it just matters that people imitating Socrates are discouraging builders and not themselves building anything. [My guess is that Socrates was in fact ‘unbalanced,’ in the way that you should expect pioneers to be by default; in part, I remember thinking he had a principled stance against writing.]
Once I read a book which began by noting that Socrates/Plato/Aristotle might not have been the first philosophers / people to ask their questions or give their answers. But they were the first that successfully started the philosophical tradition that made its way to us. This proposes the flip side: you can imagine some other city, in Greece or elsewhere, wherein the social credit allocation system is hacked by a Socratic cancer, and then the society implodes, and it didn’t start a philosophical tradition that made it to us.
Again, did this actually happen? Idk, maybe Socrates just got on the wrong side of a corrupt gangster or w/e, and then next generation was fine / improved by Socrates, or if this had actually been happening Socrates would have become a meta-contrarian, continuing to lead them towards The Good. [I do think ‘internalizing Socrates’ (in the way Vervaeke will talk about) is a good idea! I’m less sure about emulating Socrates.]
But, like, there’s a claim I saw and wished I had saved the citation of, where a university professor teaching an ethics class or w/e gets their students to design policies that achieve ends, and finds that the students (especially more ‘woke’ ones) have very sharp critical instincts, can see all of the ways in which policies are unfair or problematic or so on, and then are very reluctant to design policies themselves, and are missing the skills to do anything that they can’t poke holes in (or, indeed, missing the acceptance that sometimes tradeoffs require accepting that the plan will have problems). In creative fields, this is sometimes called the Taste Gap, where doing well is hard in part because you can recognize good work before you can do it, and so the experience of making art is the experience of repeatedly producing disappointing work.
In order to get the anagogic ascent, you need both the criticism of Socrates and the courage to keep on producing disappointing work (and thus a system that rewards those in balanced ways).
This is a great comment.
This part in particular feels very clarifying.
I’ll also observe that an antibody was planted in our culture with this post, and this one.
A core bit of this episode that didn't make it into Vervaeke's summary is the idea of 'structural functional organization'. The core example is a bird; if you tried to define a bird as the 'sum' of its parts, you would be missing out on the difference between a bird (where all the parts are carefully integrated together into a particular structure) and a bloody pile of parts (the same parts carelessly jumbled together). The intricate relationships between things that make up its structure determine its function, which determines its identity in an important way.
This, of course, ties into the whole project. You need to not just have a list of facts about the mind, but a structural functional organizational account of the mind.
This is not related to the real purpose of all these talks, but I’ve wanted to run this idea by someone for a while:
Quantum Mechanics proves that, in a sense, the reality we know very much is a Plato’s Cave type situation. In other words, everything we experience is part of a shallower ‘shadow reality’ that is causally connected to, but distinct from or just a small part of, the true nature of reality.
If the deep nature of reality is that everything we think of as “a particle” exists in superposition, exists in this ever-evolving world of configuration-states, but the only thing we ever experience is one tiny part of that superposition, one tiny slice of configuration-space, then… we’re living in shadows of a sort.
Please tell me if that’s stupid.
My favorite thought from this lecture was the idea that our discounting of future possibilities is adaptive, and a failure to discount unlikely futures could be a cause of anxiety. If all the ways you might die in 30 years were as salient to you as what you are going to eat for breakfast, your mind would never stop worrying.
In times of extreme comfort like ours, it is much easier to highly value and consider the future. This makes it easy to overthink the possible impacts of present actions on the future, and while those impacts are real, they are often so hard to predict over the vast range of possibilities that they are worth discounting and disregarding.
One of the common modern recommendations for worrying less and being happier is to do more hard things and struggle more. Present struggles force you to highly value the present—things that make you struggle are going to make you find the present salient, and figure out how to improve the present quickly. There’s no room to think about the future when doing hard things in the present, so challenging yourself in this way may help us retain our adaptive discounting of future possibilities in our comfortable environment.
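The discounting point above can be sketched numerically. This is my own illustration, not anything from the lecture, and the annual discount factor is an arbitrary number chosen for demonstration:

```python
# Exponential temporal discounting: weight an event's salience by how far
# in the future it is. The 0.8 annual factor is made up for illustration,
# not a claim about actual human discount rates.
def discounted_weight(years_away, annual_factor=0.8):
    return annual_factor ** years_away

breakfast_today = discounted_weight(0)       # weight 1.0
death_in_30_years = discounted_weight(30)    # weight ~0.001
```

An undiscounted mind (`annual_factor = 1.0`) would weight both events equally, which is exactly the "never stop worrying" failure mode described above.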
I wonder if this is why I play hours of video games every day...
Episode 3: Continuous Cosmos and Modern World Grammar
This feels like the central bit of this lecture to me, both because it points at the right way to understand myth and is also highly relevant to old conflicts between epistemic rationality and instrumental rationality.
His sense of myth seems very similar to Peterson's: the world as forum for action. [Vervaeke will use the phrase 'agent-arena relationship' a lot.] The materialist worldview is concerned with the transition probability between states; the mythological worldview is concerned with the value function of states and the policy over actions. [Those are connected but importantly distinct.] Modern myths are things like "go to college" or "recycle plastics"—by which I don't mean that college isn't real, or that going to college doesn't have real benefits, or that you shouldn't recycle. I mean something more like "choosing not to go to college, or to not recycle, feels distant from propositional beliefs in an important way." Think of The Fireplace Delusion by Sam Harris. [I once attended a lecture where the professor gave a coherent and clear argument against recycling, and then at the end of the lecture he stood by the trash cans / recycling bins to see how it would alter attendee behavior; at most 10% fewer people recycled than for other, 'control' lectures. If asked, people's sense was less "I wasn't convinced" and more "being convinced about the claims in the lecture doesn't shift my sense of whether or not it's good for me to recycle."]
So the claim here is not just “well, we used to believe in God and now we don’t”, the claim is something more like “there used to be a strong shared motivation to do this sort of self-improvement, deepening in connection, and enhancement of wisdom, that was more like ‘go to college’ than it was like a propositional belief.” [Noting, of course, that only about a third of Americans today graduate from college, and many more don’t have the sense that they should or could go to college; in the past, presumably many people didn’t have this strong shared motivation. But the past had bubbles too, and here I’m interested mostly in the bubbles that were ancestors of my / our bubble.]
Earlier he talks about wisdom and prudence in the pre-Axial societies. I’ll characterize wisdom as ‘the thing that leads to winning’, and so prudence / rationality was something like “knowing your place in the power structure in order to live long and prosper.” Very materialist, very temporal / secular. Post Axial Revolution, there’s a sense of the ‘material world’ which is temporary and fake in some important way, and the ‘immaterial world’ which is timeless and real in some important way. Wisdom now involves not getting tricked by the temporary materialist games, and instead doing the thing that’s more important or deeper; winning at the real game instead of the distraction.
Some rationalists talk sometimes (jokingly? unclear) about Bayes points as a score that they accumulate over the course of their life, and then eventually are judged on. This is very not the ‘continuous cosmos’ view, and instead is in the ‘two-worlds’ view. The thing where one’s commitment to epistemic rationality is deeper than their commitment to instrumental rationality feels like it has to be bound up in the two-worlds approach somehow; if you actually only cared about winning in the materialist sense, your behavior would be different. There’s something about the rationalist allergy to self-deception that I think can be justified in the continuous cosmos, but it takes work.
He mentions around 34 minutes in that faith has changed meaning, that it didn’t used to mean believing ridiculous things without evidence, that it meant more about knowing that you’re on course.
I wonder if there’s good evidence for that. He mentions a lot through this series so far that ancient mythology wasn’t about literally beleiving that stuff was actually happening. I find myself doubting that claim, and I’d like to see some evidence.
When wondering about the connection between "cosmos" and "cosmetics", my thought was that cosmetics are about appearances: make-up conceals and presents the thing as different. The kind of meaning he was going for was about "revealing", which is pretty much the opposite direction.
The connections can seem a bit tenuous, but it feels better when one can see that he knows what he is trying to do with them. Although it is a more goal-oriented presentation, rather than a dispassionate and evenhanded search for the handiest direction. And I guess there is also value in giving an example of the thing being talked about, rather than just talking about it.
I’m curious if you can summarize the relevance to embedded agency. This many hours of listening seems like quite a commitment, even at 2x. Is it really worth it? (Sometimes I have a commute or other time when it’s great to have something to listen to, but this isn’t currently true.)
Probably the main idea Vaniver is talking about here is Relevance Realization, which John starts talking about in episode 28 (he stays on the topic for at least a few episodes; see the playlist). But if that also seems like too much, you can read his paper Relevance Realization and the Emerging Framework in Cognitive Science. It might not be quite as in-depth, but it goes over the important stuff.
Of course, I might be wrong about which idea Vaniver was talking about :)
Only sort of. Yoav correctly points to Vervaeke’s new contribution, but I think I’m more impressed with his perspective than with his hypothesis?
That is, he thinks the core thing underlying wisdom is relevance realization, which I’m going to simply describe as the ability to identify what bits of the world (physical and logical) influence each other, in a way which drives how you should look at the world and what actions you should take. [If you think about AlphaGo, ‘relevance realization’ is like using the value network to drive the MCTS, but for a full agent, it bears more deeply on more aspects of cognition.]
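As a toy sketch of that analogy (my own construction; this is neither Vervaeke's material nor actual AlphaGo code): a value estimate can steer which parts of a search space get attention at all, so most branches are never expanded. The "relevance realization" move is the `value` function deciding what is worth looking at.

```python
import heapq

def guided_search(start, goal, neighbors, value):
    """Best-first search where `value` plays the role of a value network:
    it decides which frontier states deserve attention next, so the agent
    never exhaustively expands the space."""
    frontier = [(-value(start), start, [start])]  # max-value popped first
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-value(nxt), nxt, path + [nxt]))
    return None

# Tiny made-up world: states are integers on a number line; from 0, reach 10.
# The value function is just "closeness to the goal".
path = guided_search(
    0, 10,
    neighbors=lambda s: [s - 1, s + 1],
    value=lambda s: -abs(10 - s),
)
# path → [0, 1, 2, ..., 10]; the states below 0 are judged irrelevant
# and never explored, even though nothing forbids expanding them.
```

The point of the sketch is only that the value function shapes *where the search looks*, which is the sense in which relevance realization "drives how you should look at the world."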
But this feels like one step: yes, you have determined that wisdom is about realizing relevance, but how do you do that? What does it look like to do that successfully, or poorly?
Here, the history of human thought becomes much more important. “The human condition”, and the perennial problems, and all that are basically the problems of embedded agency (in the context of living in a civilization, at least). Humans have built up significant practices and institutions around dealing with those problems. Here I’m more optimistic about, say, you or Scott hearing Vervaeke describe the problems and previous solutions and drawing your own connections to Embedded Agency and imagining your own solutions, more than I am excited about you just tasting his conclusions and deciding whether to accept or reject them.
Like, saying “instead of building clever robots, we need to build wise robots” doesn’t make much progress. Saying “an aspect of human wisdom is this sort of metacognition that searches for insights that determine how one is misframing reality” leads to “well, can we formalize that sort of metacognition?”.
[In particular, a guess I have about something that will be generative is grappling with the way humans have felt that wisdom was a developmental trajectory—a climbing up / climbing towards / going deeper—more than a static object, or a state that one reaches and then is complete. Like, I notice the more I think about human psychological developmental stages, the more I view particular formalizations of how to think and be in the world as “depicting a particular stage” instead of “depicting how cognition has to be.”]