Wow, that’s really cool to learn! I only have a beginner level knowledge of functional programming concepts and was not aware of hylomorphisms and unfolds (just basics like fold left, fold right). Thanks for bringing that to my attention, I might try to read that whole series.
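For anyone else at the same beginner level, here's a rough sketch of how these concepts relate, in Python rather than the Haskell-style notation that series uses (the function names here are my own, purely illustrative): a fold collapses a list into a value, an unfold builds a list from a seed, and a hylomorphism is an unfold followed by a fold.

```python
# A fold (catamorphism) collapses a list into a single value.
def fold_right(f, acc, xs):
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

# An unfold (anamorphism) grows a list from a seed value.
# `step` returns (next_element, next_seed), or None to stop.
def unfold(step, seed):
    xs = []
    while (r := step(seed)) is not None:
        x, seed = r
        xs.append(x)
    return xs

# A hylomorphism is just an unfold followed by a fold:
# build up an intermediate structure, then collapse it.
def hylo(f, acc, step, seed):
    return fold_right(f, acc, unfold(step, seed))

# Example: factorial as a hylomorphism.
# Unfold n into [n, n-1, ..., 1], then fold with multiplication.
def countdown(n):
    return (n, n - 1) if n > 0 else None

def factorial(n):
    return hylo(lambda x, a: x * a, 1, countdown, n)

# factorial(5) → 120
```

In the real treatments the intermediate list is usually fused away so it never materializes, but the sketch shows the shape of the idea.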
Not so long ago I hit upon a definition of rationality which better captures what people actually use the term to refer to.
I think a solid definition of rationality is: trying to do better on purpose.
Instead of following your default procedure, you pull back and do something different to get a better result. This might be thinking in ways other than the default or doing things in ways other than the default.
A natural consequence of trying to do better on purpose is that you look for higher-level improvements rather than purely immediate, domain-specific, concrete ones. Many people think to practice and train skills; the rationalist also reflects on how they train and practice. Many people work towards goals; the rationalist pauses to reflect on their selection of goals.
So why care about rationality? Because you want to do better.
If you believe that it is possible to do better, and that doing better results in more of what you want—then surely you would want that.
It was in the context of God’s actions, but I’ve always liked the phrase they taught me in school:
סוף סוף במחשבה תחילה
The end, the end, [that which was] first in thought.
It translates imprecisely, but flows so nicely in Hebrew.
No worries! Maybe we can get to the bottom of it another time, maybe another place. :)
Thanks for the elaboration. Yes, I see what you mean by brute force, and I also see how my post might be read as advising an approach similar to what you described. I don’t know whether a pragmatic approach like that is a good developmental stage to go through. Maybe for a bit, but I’m not sure.
If the post didn’t shed any light on how a brute force approach is not the only option and not necessarily the best, I think it’s because I forgot that someone might approach motivation in that way. Only reading your description brought it back into my mind.
Going back five or six years, I did have a phase when I was very big on “discipline”. I certainly tried to muster willpower to make myself do things—but it was never that successful or systematized. Around the time I began making more serious efforts to be productive, I was already engaged with CFAR, reading mindingourway.com, and generally being coached into an approach of non-willpower-reliance and non-self-coercion. Yet it must have been long enough ago that I’d forgotten there’s a very natural approach to motivation where you pile on productivity tricks in a not-quite-sustainable/healthy way.
So, thanks for pointing that all out. That’s a good reminder.
For the public record, I think ideal motivation is attained when you have something resembling a state of harmony in your mind and with yourself. You might take steps to make desired actions seem more attractive and/or do things to decrease temptation, but it isn’t coercive or depleting. This is difficult to achieve and requires a lot of introspection, self-awareness, resolving inner conflicts, etc., etc. If you’re doing it right, you’re not suffering. You don’t crash. It doesn’t feel like you’re coercing yourself.
It’s possible I should have stated something like that in the post itself.
Hmm, I’d like to step back and tally the different claims that have been surfaced so we can at least be clear where we disagree, even if we don’t end up agreeing. Among the claims:
A: Abstractions are sometimes useful.
B: Abstractions eventually break down and the underlying complexity needs to be understood for further usefulness.
C: The abstraction in my post is only compelling from a certain stage of development / it’s limited (though this assertion was accompanied by the statement that this doesn’t make it wrong or not useful).
D: The abstraction in my post is unlikely to help many people.
E: The particular abstraction in my post is leaky, is limited, and can become harmful after a certain point in development.
F: Abstractions are indispensable and are needed to guide understanding even when you dip to lower layers.
G: It’s harmful to always be trying to look at lower levels of abstraction without higher levels.
H: Not understanding lower levels means that you don’t understand very much at all.
That’s not every assertion, but it’s maybe enough to start getting some clarity. I think that Gordon, mr-hire, and myself all agree on A and B broadly, though we might disagree on where the line is drawn for each. Gordon, you write:
This sounds like the crux of the disagreement: I think no abstraction is sufficiently non-leaky that you don’t (eventually) need to understand more of the underlying complexity within the context I see this post sitting in, which is the context of what we might call cognitive, personal, psychological, or personal development (or to put it in a non-standard term, the skill of being human). Unless your purpose is only to unlock a little of what you can potentially do as a human and not all of it, every abstraction is eventually a hindrance to progress, even if it is a skillful hindrance during certain phases along the path that helps you progress until it doesn’t.
I mean, I wouldn’t disagree with that in general. Though I think a lot of work is being done by the words “eventually” and “sufficiently non-leaky”. I think there are contexts where you get away without needing to go all the way down. Most (I would think the overwhelming majority of) people who code don’t know assembly and certainly don’t understand how logic gates are implemented—and this is pretty fine 99.99% of the time.
It is fair to say that the abstraction/model in my post is not nearly as good as the abstraction of high-level computer languages. That’s true. I mean, actually it breaks pretty quickly. Part 2 of this post will dive deeper. Nonetheless, I do think it’s quite useful even if one doesn’t read further. Gordon, I’m unclear on what your stance is—you first state that it’s useful and then state that it’s unlikely to help many people, so I’m not sure of your actual thought.
I do disagree with C (compelling only from a certain stage of development) in that I think even once you have much deeper understanding, the higher levels of abstraction remain crucially important. Just because you understand electromagnetism really well and know the limits of conventional circuit theory (e.g. designing super duper tiny transistors), doesn’t mean you want to throw out circuit theory and just solve Maxwell’s equations everywhere—even if eventually sometimes you have to.
I don’t disagree that you need more detail for a lot of applications. As mentioned, this unfortunately couldn’t make it into this first post. As I wrote:
Saying that motivation is a matter of winning in the moment is all very good, but how does one actually do that?
Unfortunately, a proper treatment of this not-so-small topic would make this post far too long and instead requires its own post (Motivation Part 2: How to Win, coming soon to a screen near you!). Nonetheless, I can offer a high-level summary here:
But again, I don’t think what was presented here stops being compelling later on.
I also think D (unlikely to help many people) is somewhat false, depending on what counts as “many people”. Another commenter felt this post was quite useful, someone else on FB found it rather revelatory, and I’d infer that several more benefited even if I don’t know of it directly. That’s beyond the inside view that the abstraction/model presented can already be applied. mr-hire also states simpler ideas worked well for a really long time (though I’m not sure which simpler ideas, or what counts as “brute force”).
Back to B (abstractions break down, eventually become a hindrance). Definitely agree here.
I think if your initial comment, Gordon, had been something like:
I think this model/abstraction is correct and useful to an extent, but I want to flag that it is just a very high-level abstraction which is missing a lot of the very messy detail that is relevant (and quite necessary) for this domain. It’ll help you on its own at a certain stage, but after that you’ll need more.
Then I wouldn’t have disagreed at all. I think the disagreement might mostly be around a) how quickly abstractions break down, and b) how much you still need them even once you understand the underlying detail.
To be honest, I did bristle at the way some things were phrased, but that’s on me. It felt like there was some kind of implication that I personally didn’t have any deeper understanding, and that stung a little.
That is, what’s going on here is much deeper than it appears to you, and if you keep pushing to explain the opaque parts of this model (like, “where do the beliefs that power motivations come from?” and “why do you even prefer one thing to another?“) you’ll see it explode apart in a way that will make you go “oh, I had it right, but I didn’t really understand it before” the same way you might think you understand how any complex system like a watch or a computer program works until you start literally looking at the gears or electrical currents and then say “oh, I’m amazed I even had such good understanding before given how little I really understood”.
This felt like a slight, since I think the post references much more detailed resources and even flags in the opening lines that what’s presented is the “crudest simplification”. Quite possibly, though, you were addressing the hypothetical reader rather than me. But even if not, I still shouldn’t let that influence my response too much. The additional words of caution about the limitations of abstractions (especially here) are worthwhile. I regret that, because of these very long comments, readers might not see this point and our overall agreement on it.
Hmm, I do think the thing I haven’t addressed here is my stance that better abstractions and better understanding of abstractions (something I think is neglected in the domain of self-understanding and self-improvement) are actually key to using lower-level understanding more systematically and in a less ad-hoc way. Perhaps I’ll save that for another very long comment :P
By the way, thanks for engaging so much. I don’t think I’ve ever dived into such a detailed discussion.
Thanks! It’s encouraging words like those which keep me writing.
I’d say one of the things I attempt to do with my writing (and in my thinking too) is lay out the foundations sufficiently clearly that you can never forget them and therefore always apply them appropriately. I find that points which initially feel obvious to me are actually still a bit murky and haven’t been fully absorbed, and therefore I don’t actually use them or appreciate their many applications. Getting clear and accurate explanations (even just for myself) makes them accessible enough to my mind that they become a lot more useful.
A further benefit, and the original reason I found myself doing this style of writing, is that clear foundations allow you to express more complicated, profound, nuanced pieces within a solid context. Being clear on the foundations, at least for me, makes all the more advanced pieces fall into place and seem much more necessary than if I had encountered them on their own. It gives you a framework to hang things on. The “make your champion stronger” vs. “make the competition weaker” distinction is an example of this.
The overall result is a bunch of writing that doesn’t register as particularly profound, but is very clear. Or so I’ve been told.
My entry point for this is actually thinking and writing about planning. I’ve been writing something of a sequence/book on planning and any practical planning book for humans would benefit greatly from a decent treatment of motivation (and planning around it). The insights/models in this post arose naturally from that context.
With that background, I think I can explain why I disagree with many of your points.
First, I agree that you get the model right, but it’s a model that is only very compelling from a certain stage of development, my strongest evidence being that it was once very compelling to me and now it’s more like the kind of understanding I would have if I was asked to manifest my understanding without explaining below a certain level of detail, and the other being that I think I’ve seen a similar pattern of discovering this and then focusing on other things in the writing of others. That doesn’t make any of it wrong or not useful, but it does suggest it’s rather limited, as I think fellow commenter Romeo also points out. That is, what’s going on here is much deeper than it appears to you, and if you keep pushing to explain the opaque parts of this model (like, “where do the beliefs that power motivations come from?” and “why do you even prefer one thing to another?”) you’ll see it explode apart in a way that will make you go “oh, I had it right, but I didn’t really understand it before” the same way you might think you understand how any complex system like a watch or a computer program works until you start literally looking at the gears or electrical currents and then say “oh, I’m amazed I even had such good understanding before given how little I really understood”.
The insights in the post exist at a certain level of abstraction; as you say, the post doesn’t manifest understanding below a certain level of detail. That’s quite intentional. But I disagree that it is only compelling from a certain stage of development, or that it is rather limited. Far from it.
I’ve been aware of the underlying details (just see the references) for a lot longer than I’ve appreciated the high-level general points here, because I think the lower-level points easily obscure the higher-level picture. This is perhaps related to your assertion that other writers haven’t done justice to the breadth of the ideas here. If you can’t stop thinking about transistors, you will find it hard to focus on and fully appreciate the boolean algebra you’re executing on your logic gates made out of transistors. It’d be even harder to teach someone boolean algebra (let’s say minimizing digital circuits) if you want them to keep transistor operation in mind at all times. And if your abstractions are tight (not leaky) enough, you actually don’t need to understand the underlying complexity for them to be useful. Transistors and logic gates are human designs, though. A better example might be understanding evolutionary selection processes. If you can’t abstract away from the biological implementation of sexual reproduction for a few minutes, you’re likely to miss the higher-level picture of why sexual reproduction is even a thing. What problem was it trying to solve, and what does it imply for the implementation?
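To make the “tight abstraction” point concrete with a toy example of my own (not from the post): a law of boolean algebra can be verified entirely at the boolean level, with zero knowledge of how the gates are physically realized.

```python
from itertools import product

# The absorption law of boolean algebra: a AND (a OR b) == a.
# Verified by exhaustive truth table. Nothing here depends on whether
# the gates underneath are transistors, relays, or falling dominoes.
def absorption_holds():
    return all(
        (a and (a or b)) == a
        for a, b in product([False, True], repeat=2)
    )

# absorption_holds() → True
```

The abstraction is tight enough that reasoning at this level is complete on its own; you only have to drop below it when the physical layer leaks (timing, noise, power).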
In this context, though. Hmm. I think the insights/models here aren’t sufficient on their own to help you manage your motivation well, and perhaps that’s your point. The original post had to be split into two parts because adding in the further models needed would easily have ballooned the post out to six thousand words. If your point is “there isn’t enough detail here to be practically useful”, that’s kind of true.
A major hope for this piece is that if you appreciate the abstraction at this level, you will understand why all the lower-level pieces look the way they do. Many people have created lists of anti-akrasia/motivation-enhancing techniques, as well as highly detailed reviews of how motivation works (just see Luke’s review). A goal of this post is that you see enough of the general picture that it is clear why various motivation techniques work and when they’re needed. If I launched into talking about Hebbian learning and prospect theory, I assert you’d probably miss the very design problem that, at its heart, the mind/motivation system is trying to solve. Hence holding off on that lower-level material for other posts.
Second, just to set expectations, it’s unfortunately unlikely that having this model will actually help many people. Yes, it will definitely help some who are ready to see it, but years of trying to explain my insights has taught me that one of the great frustrations is that fundamental insights come in a particular order, they build on each other, and the deeper you go, the smaller the audience that explaining your insights will help.
I think I’m more optimistic than you about communicating ideas, though perhaps I’m just sufficiently early in my writing “career” to be naive. I’m working on the premise that sufficiently clear explanations, delivered systematically in the right order, can recreate in the minds of others much of the understanding you have in your own mind. That does require that people be willing to invest the time, but I think people do invest in reading writing that is sufficiently enjoyable and valuable-seeming.
You’re right. Thanks for pointing that out.
Unfortunately, I couldn’t include them in a single post of reasonable length. Temporal motivation theory / the procrastination equation will feature in the eventual Part 2 to this post.
This is a good clarification. Technically you’re right: you can maintain motivation so long as you reaffirm commitment in each moment you are tempted, which admittedly might not be every moment (consider flow states).
Though I’d still argue that you should still be thinking about causing yourself to win in every moment. You might maintain motivation through to the completion of a task because either: a) you successfully reaffirmed commitment, or b) no alternative candidate winner was surfaced in a given moment to begin with, yet it matters that one of those is true for every moment of necessary execution.
Though not covered properly in this post, the eventual goal here is to explore how to engineer circumstances, both internal and external, so that you win in each moment whether it be because of a) or b). And that applies to every moment.
Over the years, I’ve experienced a couple of very dramatic yet rather sudden and relatively “easy” shifts around major pain points: strong aversions, strong fears, inner conflicts, or painful yet deeply ingrained beliefs. My post Identities are [Subconscious] Strategies contains examples. It’s not surprising to me that these are possible, but my S1 says they’re supposed to require a lot of effort: major existential crises, hours of introspection, self-discovery journeys, drug trips, or dozens of hours with a therapist.
Having recently undergone a really big one, I noted my surprise again. Surprise, of course, is a property of bad models. (Actually, the recent shift occurred precisely because of this line of thought: I noticed I was surprised and dug in, leading to an important S1 shift. Your strength as a rationalist and all that.) Attempting to come up with a model which wouldn’t be as surprised, this is what I’ve got:
The shift involved S1 models. The S1 models had been there a long time, maybe a very long time. When that happens, they begin to seem like how the world just *is*. If emotions arise from those models, and those models are so entrenched they become invisible as models, then the emotions too begin to be taken for granted—a natural way to feel about the world.
Yet the longevity of the models doesn’t mean that they’re deep, sophisticated, or well-founded. They might be very simplistic, ignoring a lot of real-world complexity. They might have been acquired in formative years, before one had learned much epistemic skill. And they haven’t been reviewed, because it was hardly noticed that they were beliefs/models rather than just “how the world is”.
Now, if you have a good dialog with your S1, if your S1 is amenable to new evidence and reasoning, then you can bring up the models in question and discuss them with your S1. If your S1 is healthy (and not entangled with threats), it will be open to new evidence. It might very readily update in the face of that evidence. “Oh, obviously the thing I’ve been thinking was simplistic and/or mistaken. That evidence is incompatible with the position I’ve been holding.” If the models shift, then the feelings shift.
Poor models held by an epistemically healthy “agent” can rapidly change when presented with the right evidence. This is perhaps not surprising.
Actually, I suspect that difficulty updating often comes from S1 models that are instances of the broccoli error: “If I updated to like broccoli then I would like broccoli, but I don’t like broccoli, so I don’t want that.” “If I updated that people aren’t out to get me then I wouldn’t be vigilant, which would be bad since people are out to get me.” Then the mere attempt to persuade that broccoli is pretty good / people are benign is perceived as threatening and hence resisted.
So maybe a lot of S1 willingness to update is very dependent on S1 trusting that it is safe, that you’re not going to take away any important, protective beliefs or models.
If there are occasions where I achieve rather large shifts in my feelings from relatively little effort, maybe it is just that I’ve gotten to a point where I’m good enough at locating the S1 models/beliefs that are causing inner conflict, good enough at feeling safe messing with my S1 models, and good enough at presenting the right reasoning/evidence to S1.
Thanks, really appreciate that! Both the fixes and your thoughts. :) Updates me towards it being worthwhile to post things with lower levels of polish rather than not at all.
Good post! Thanks for writing this, it adds additional clarity and generality to things I’ve been thinking about recently.
I’ve been at a workshop and haven’t had much chance to engage with this post. Thanks for writing it; it’s an excellent reply and says many things better than I managed to. I especially like the hierarchy which swings between nurture and combat; that seems well described to me. Also, strong endorsement for meeting conversations where they’re at.
I probably didn’t emphasize this enough in the main post, but the idea I’m really going for is that there is a difference between optimizing for stories and optimizing for reality. There’s a difference in goal and intention. Even if it’s the case that humans never see “rock-bottom reality” itself and everything is mediated through experience, there is still a big difference between a) someone attempting to change an aspect of the underlying reality such that actually different things happen in the world, and b) someone attempting to change the judgments of another person by inputting the right series of bits into them.
Optimizing stories is really about a mono-focus on optimizing the specific corners of reality which exist inside human heads.
Oh, right. Once upon a time I knew that was the word. Thanks.
I didn’t know that was the word for excuse, but I think it’s an excellent word to use for rationalization. No synonym required. ״רצה״ is the root for “want” and “הַתְרָצָה” is the reflexive conjugation, so it’s approximately “self-wanting.” Which is exactly what rationalization is—reasoning towards what you want to be true.