C’mon guys, Deliberate Practice is Real
I’m writing a more in-depth review of the State of Feedbackloop Rationality. (I’ve written a short review of the original post.)
But I feel like a lot of people have some kind of skepticism that isn’t really addressed by “well, after 6 months of work spread out over 1.5 years, I’ve made… some progress, which feels promising enough to me to keep going but is not overwhelming evidence if you’re skeptical.”
So, this post is sort of a rant about that.
A lot of people have default skepticism about “the feedbackloop rationality paradigm” because, well, every rationality paradigm so far has come with big dreams and underdelivered. I, too, have big dreams, and I am probably not going to achieve them as much as I want. That’s not a crux[1] for me.
A lot of people also have some notion that “building multiple feedback loops” is a kind of elaborate meta that puts your head in the clouds. Dudes, the whole point of multiple feedback loops is to put you in direct contact with reality in as many ways as practical.
The thing that drives me nuts is that the basic premise here is:
If you put in a lot of effortful focused, practice...
...on the edge of your ability
...with careful attention to immediate feedback
...and periodic careful attention to “is this process helping my longterm goals?”
...you will get better at thinking.
(And, if a lot of people systematically do that and we track what works for them, we can get from the domain of Purposeful Practice to Deliberate Practice[2])
This seems so reasonable. This really is not promising a free lunch.
(I should clarify: I am not saying that there is robust scientific evidence for deliberate practice working for thinking. There is not – the evidence for meta-learning is generally confused and sucks, and there particularly is not evidence that deliberate, purposeful practice works on open-ended conceptual questions. I’m saying “c’mon, what the fuck do you think will work better at improving your ability to think? a) careful, focused practice on the edge of your ability with careful attention to feedback, or b) just generally trying to do object-level work in domains where you ~never get much feedback on whether you’re really accomplishing your ultimate goals?”)
There are a lot of followup questions that aren’t as obvious, but it seems crazy to not believe that. And if you took it seriously, I don’t think the right response is to shrug, and wait for Ray to come back with some visibly impressive results.
I am, in fact, trying to get some visibly impressive object-level shit done so that skeptical people say “okay, I guess the evidence is enough to warrant a more serious look.” If I maintain a lot of momentum, I might be at that point in ~a year. Figuring out how to do that without burning out is kinda hard (which is what I’m currently working on).
But, I feel like I really shouldn’t have to do this to argue the claims: “effortful, focused practice at thinking with careful attention to feedback will pay off” and “improved thinking is important for making plans about x-risk and AI safety.”
Some people follow up with “I believe Ray found something that worked for Ray, but don’t believe it’ll generalize.” Which is also a sort of fair response to generically finding out some guy is out there inventing a rationality paradigm, without knowing any details. But, c’mon guys, the whole point of the Feedbackloop Rationality Paradigm is to improve your own feedbackloops with your own judgment, with a mixture of legible arguments as well as feelings in your heart.
I’m annoyed about it because there are a lot of people around who seem pretty smart in many ways, who I think/hope can help humanity thread the needle of fate, but who seem at least somewhat metacognitively dumb sometimes, in ways that seem fixable. They bounce off for various reasons. (I get this more from established professionals in the x-risk space than from median LessWrong folk, who generally seem excited for some kind of rationality training content.)
If I ask them why, they mention some arguments that seem like, sure, a reasonable thing to think about (e.g. “if you focus on feedback you might Goodhart yourself”, or “I don’t have hours and hours to spend doing toy exercises”). But they don’t spend 5 minutes thinking about how to address those problems, which IMO seem totally possible to address.
I know a small number of people who seem to have actually constructed some kind of practice regimen for themselves (which might look very different from mine). This rant is not directed at them. (I do think those people often could use more deliberate attention on what sort of practice they do, and how often they do it. But, I’m not mad about it.)
Is the evidence for this ironclad? No. But neither is the evidence that just focusing on your object level work is going to be enough to steer towards the most important questions and approaches in time. And… I just roll to disbelieve that your true objection is that you don’t believe in purposeful practice.
FAQ on some things that seem reasonable to be skeptical about
I don’t think practicing “better thinking” will help me. I think I should focus on domain-specific skills.
For many people – yep, that sounds correct. Literally the first post in the Feedbackloop rationality sequence is Rationality !== Winning, where I note that even if rationality can help you with your problems, if you have pretty ordinary problems, it’s not obviously the right tool for the job.
I’m focused on “thinking that helps you solve confusing problems, for which there is no established playbook.”
With AI safety in particular, I think you should be worried that the job is confusing, such that it’s not obvious what the right tool for the job is. We’ve never demonstrably stopped an unfriendly AI FOOM or a slow-rolling loss of power before. I think you should have a lot of uncertainty about what the right approaches are.
We know how to mentor people into specific research paradigms, but AFAICT humanity doesn’t have much systematized training on how to figure out which research field (if any) is coming at things from the right angle.
I’m particularly worried that a lot of people are just following a gradient of learning ML, assuming the ML research paradigm is basically the right lens, without cultivating the ability to form good taste on whether that’s true. (And, for that matter, that other people are following a gradient of “learn whatever math happened to be involved with existing agent foundations research and try to get better at that.”)
But, you can’t practice the most important parts of thinking, for preparadigmatic research. Focusing on feedback loops will lead you to Goodhart.
I think if you sit and think about it, with an attitude of “how can I solve this problem?”, rather than “this seems obviously hopeless”, it is really not that hard to come up with ways to practice.
There’s a cluster of skills necessary for tackling a lot of “real x-risk work”, or really, any pursuit of a difficult, confusing goal. Some examples:
Noticing you are confused
Noticing you are still confused
Sitting, for a long time, with the discomfort of being confused and not sure if your project is on the right track.
Generating situation-appropriate-strategies for dealing with that confusion and tractionlessness.
Cultivating open curiosity
Finding questions worth asking.
Noticing that your strategy was subtly wrong, and figuring out a way to make it work.
Grieving for the fact that your current project was fundamentally flawed and you wasted a lot of time, and moving on
Noticing you are missing a skill, or knowledge, and figuring out how to quickly gain that skill/knowledge.
Noticing when your plans are resting on shaky assumptions
Validating those assumptions early, if you can.
Developing a calibrated sense of when your meandering thought process is going somewhere valuable, vs when you’re off track.
I think all of those are trainable skills. Yes, it’s possible to misgeneralize from toy problems to real problems, but I think the general engine of “practice on a variety of Toy Confusing Problems, and develop a sense of the failure modes you run into there, and how to account for them” just makes a pretty solid foundation for tracking the subtler or deeper versions of those problems that you run into in the wild.
Is it possible to overfit on your training? Yes; that’s why I recommend a variety of types of confusing challenges, so you can suss out which skills actually generalize.
Maybe, but this will just take so much time. Why is this going to pay off enough to be better than object level research?
This is the question that feels most reasonable to me. Purposeful Practice requires peak cognitive hours, and you don’t have that many of those.
My main answer is: identify skills that you need day-to-day for your most important projects, and find an angle for practicing them with focused attention for ~15 minutes a day, applied to your day job. You keep your peak hours focused on your primary goals, with a bit of time directed at “how can I push the frontier of my thinking capacity?”
It seems reasonable to spend those 15 minutes on a mix of domain-specific skills and higher-level decisionmaking. My personal practice is a mix of “code debugging” (a naturally occurring rationality challenge) and “deciding what to do today.”
Some details from my self-review of the feedbackloop rationality post
Fluent enough for your day job.
The primary aim of my workshops and practice is to get new skills fluent enough that you can apply them to your day job, because that’s where it’s practical to do deliberate practice in a way that pays for itself, rather than being an exhausting extra thing you do.
“Fluency at a new skill that seems demonstrably useful” is also a large enough effect size that there’s at least something you can measure near-term, to get a sense of whether the workshop is working.
Five minute versions of skills.
Relatedly: many skills have elaborate, comprehensive versions that take ~an hour to get the full value of, but you’re realistically not going to do those most of the time. So it’s important to boil them down into something you can do in 5 minutes (or, 30 seconds).
Morning Orient Prompts.
A thing I’ve found useful for myself, and now think of as one of the primary goals of the workshop to get people to try out, is a “morning orient prompt list” that you run through every day.
It’s important that it be every day, even when you don’t need it too much, so that you still have a habit of metacognition for the times you need it (but, when you don’t need it much, it’s fine/good to do a very quick version).
It’s useful to have a list of explicit prompts, because that gives you an artifact that’s easier to iterate on.
Also, consider at least one extended deliberate practice push
I think ~20 hours is often enough to get “n00b gains” in a new specific skill. My personal experience is that ~40 hours has been enough to take a skill where I’d plateaued, and push through to a demonstrably higher level.
I think it’s worth doing this at least once and maybe twice, in a continuous marathon focus, to get a visceral sense of how much work is involved, and which sorts of things actually help you-in-particular. But, this is a lot more expensive, and I don’t begrudge people who say “well, that’s more than I can commit.”
Realistically, I know I won’t stick to this sort of thing, and even if it’d work if I stick to it, in practice it’ll just sort of sputter and fail. Which doesn’t seem worth it.
Sigh, nod. This is indeed the reason that deliberate practice kinda sucks, which is why it’s underinvested in, which is why I think there’s alpha in it.
Generally my answer to this is to a) try to build structures that generally help people with this, and b) invest in a Thinking Assistant ecosystem that can help people with general focus, one use of which is “help do your serious practice.”
I have limited time, so I don’t have generally open offers to help with this. But my current process is “after you’ve come to a workshop, you’re invited to Deliberate Practice club, which meets ~once a month, and charges money to keep it sustainable for me + filter for seriousness”, where we orient together on “How did the last month go? Are we on track?”
Okay, but I bet Ray is doing some oddly specific stuff that works for Ray, and I don’t really think it’ll help me.
Maybe! I think people’s cognitive styles do vary.
I feel pretty happy if people read this, and go off on their own to construct their own practice regimens that work for them.
I do think I’ve got a bunch of domain expertise by now in “how to construct exercises on-the-fly that help people train skills that are relevant to them.” At my workshops, I explicitly tell people “you can do a different exercise, if a given session doesn’t feel useful to you.” A problem is that designing exercises for yourself is itself a skill, and most people end up doing something not-too-productive the first couple times they try. But I generally try to be building containers that help people put in the practice, wherever they’re currently at.
I do have a specific vibe though. I’m working on being more flexible about it. (Someone said “I kinda wish there was a Logan-y version of this workshop” and I think that’s a pretty legit thing to want in the world)
The skill ceiling here doesn’t seem that high, I’m already pretty competent, it doesn’t seem worth it.
Maybe. I don’t know you. Maybe you’ve consumed all the low- and medium-hanging fruit here.
But, idk. 10x programmers seem to exist. 10x UI designers seem to exist. When I look at the gears of how people seem to struggle with it, it seems to me pretty likely that 10x confusing-problem-navigation should be possible. There are tons of subtly wrong ways to make plans, and tons of subtly-or-not-so-subtly dumb ways of sticking with them too long.
I realize that is not that good an argument if you don’t already share my intuitions here. But I am interested in hearing from specific people about what they think they know, and why they think they know it, about how much room for improvement they have.
Why (exactly) won’t it work (for you, specifically)?
One of my favorite frames/prompts I’ve found in the past year is “Very specifically, why is this impossible?”
I’ve seen a few people respond to this with some kind of broad dismissal.
I am interested in hearing critiques from people who’ve set, like, at least a 15 minute timer to sit and ask themselves, “Okay, suppose I did want to improve at these sorts of skills, or related ones that feel more relevant to me, in a way I believed in. What concretely is hard about that? Where do I expect it to go wrong?”, and then come back with something more specific than “idk it just seems like this sort of thing won’t work.”
And then, for each of those reasons, ask “okay, why is that impossible?”
And sometimes you’re left with pieces that are actually impossible. Sometimes you are left with pieces that are merely “very hard.” And sometimes, as soon as you sit and think about it for 2 seconds, you go “oh, well, okay obviously I could do that, it’d just be slightly inconvenient.”
I would like to hear critiques from people who have spent at least some time inhabiting the question “okay, if I tried to roll up my sleeves and just do this, either for myself, or for contributing somehow to an ecosystem of confusing-problem-solving-training, what actually goes wrong?”
[1] I am holding the big dreams as my Polaris because it’s helpful for cutting as quickly as possible to whatever the best outcomes turn out to be. “Shoot for the moon and… you’ll end up in low-earth orbit, which is pretty cool actually!”
“Shoot for olympic-level training for x-risk researchers and you’ll end up with pretty solid training that gets a bunch of people more traction on the hardest parts of the problem(s).”
[2] Contrasted with Naive Practice, Purposeful Practice involves sustained focus on skills at the edge of your ability, with explicit goals, paying careful attention to feedback.
Deliberate practice is “Purposeful Practice that we know works.” (I claim that the current state of my paradigm is “minimum-viable-deliberate-practice”, in that it’s been through at least a couple rounds of weeding out stuff that didn’t work, and the stuff that remains has worked at least decently)
FWIW it’s not TOTALLY obvious to me that the literature supports the notion that deliberate practice applies to meta-cognitive skills at the highest level like this.
Evidence for this type of universal transfer learning is scant.
It’s clear to me from my own experience that this can be done, but if people are like “ok buddy, you SAY you’ve used focused techniques and practice to be more productive, but I think you just grew out of your ADHD” (which people HAVE said to me), I don’t think it’s fair to just say “c’mon man, deliberate practice works!”
I think your second objection is actually very strong.
Not because you’ll Goodhart, but because people think it’s plausible that the mind just isn’t plastic on this level of basic meta-cognition. There’s lots of evidence AGAINST that, and many times that people THINK they’ve found this sort of universal transfer, it often ends up being more domain-specific than they thought.
Probably the most compelling evidence that it’s possible is spiritual traditions (especially empirical ones like Buddhism) that consistently show that through a specific method, you can get deep shifts in ways of seeing and relating to everything that are consistently described the same way by many different people over space and time. But in terms of the experimental literature I don’t think there actually IS much good support for universal transfer of meta-cognition via deliberate practice.
Okay, yeah this should have been dealt with in the OP. I have thoughts about this but I did write the essay in a bit of a rush. I agree this is one of the strongest objections.
I had someone do some review of the transfer learning literature. There was nonzero stuff there that seemed to demonstrably work. But mostly it seemed like we just don’t really have good experiments on the stuff I’d have expected to work. (And, the sorts of experiments that I’d expect to work are quite expensive.)
But I don’t think “universal transfer learning” is quite the phrase here.
If you learn arithmetic, you (probably) don’t get better at arbitrary other skills. But, you get to use arithmetic wherever it’s relevant. You do have to separately practice “notice places where arithmetic is relevant.” (Like, it may not occur to you that many problems you face are actually math problems. Or, you might need an additional skill like Fermi Estimation to turn them into math problems.)
The claim here is more like: “Noticing confusion”, “having more than 1 hypothesis”, “noticing yourself flinching away from a thought that’d be inconvenient” are skills that show up in multiple domains.
My “c’mon guys” here is not “c’mon the empirical evidence here is overwhelming.” It’s more like “look, which world do you actually expect to result in you making better decisions faster: the one where you spend >0 days on testing and reflecting on your thinking in areas where there is real feedback, or the one where you just spend all your time on ‘object level work’ that doesn’t really have the ability to tell you you were wrong?”.
(and, a host of similar questions, with the meta question being “do you really expect the optimal thing here to be zero effort on metacognition practice of some kind?”)
Obviously there is a question of how much time to spend on this is optimal, and it’s definitely possible (and perhaps common) to go overboard. But I also think it’s not too hard to figure out how to navigate that.
I mostly agree in general and I feel ya on the “c’mon guys” thing, yet I don’t do my own separate “rationality practice”.
For me, it’s basically the same reason why I don’t spend much time in a weight room anymore; I prefer to keep my strength by doing things that require and use strength. I’m not against weight lifting in principle, and I’ve done a decent amount of it. It’s just that when I have a choice between “exercise muscles for the sake of exercising muscles” and “exercise muscles in the process of doing something else I want to do anyway”, the latter is a pure win if the exercise is anywhere near equivalent. Not only is it “two birds with one stone”, it also streamlines the process of making sure you’re training the right muscles for the uses you actually have, and streamlines the process of maintaining motivation with proof that it is concretely useful.
The option isn’t always available, obviously. If your object level work doesn’t have good feedback, or you’re not strong enough to do your job, then specific training absolutely makes sense. Personally though, I find more than enough opportunities to work on meta cognition as applied to actual things I am doing for object level reasons.
The thing that seems more important to me isn’t whether you’re doing a separate practice for the sake of learning, but whether you’re reflecting on your thinking in areas where there’s real feedback, and you’re noticing that feedback. I do think there’s a place for working on artificial problems, but I also think there’s an under-recognized place for picking the right real-world problems for your current ability level, with an expectation of learning to level up. And there’s an underappreciated skill in finding feedback on less legible problems.
I feel like the “this will just take so much time” section doesn’t really engage with the full-strength version of the critique.
When I think of people I know who have successfully gone from unremarkable to reasonably impressive via some kind of deliberate practice and training, the list consists of Nate Soares and Alex Turner. That’s it; that’s the entire list. Notably, they both followed a pretty similar path to get there (not by accident). And that path is not short.
Sure, you can do a lightweight version of “feedbackloop rationality” which just involves occasional short reviews or whatever. But that does not achieve most of the value.
My pitch to someone concerned about the timesink would instead be roughly: “Look, you know deep down that you are incompetent and don’t know what you’re doing (in the spirit of Impostor Syndrome and the Great LARP). You’re clinging to these stories about how the thing you’re currently doing happens to be useful somehow, but some part of you knows perfectly well that you’re doing your current work mainly just because you can do your current work without facing the scary fact of your own ineptitude when faced with any actually useful (and difficult) problem. The only way you will ever do something of real value is if you level the fuck up. Is that going to be a big timesink and take a lot of effort? Yes. But guess what? There isn’t actually a trade-off here. Your current work is not useful, you are not going to do anything useful without getting stronger, so how about you stop hiding from your own ineptitude and just go get stronger already?”.
(And yes, that is the sort of pitch I sometimes make to myself when considering whether to dump time and effort into some form of deliberate practice.)
Probably will have a bunch more to say, but immediate question is “what’s your story about the gears for Soares/Turner?”
I don’t claim to know all the key pieces of whatever they did, but some pieces which are obvious from talking to them both and following their writings:
Both invested heavily in self-studying a bunch of technical material.
Both heavily practiced the sorts of things Joe described in his recent “fake thinking and real thinking” post. For instance, they both have a trained habit of noticing the places where most people would gloss over things (especially technical/mathematical), and instead looking for a concrete example.
Both heavily practiced the standard metacognitive moves, e.g. noticing when a mental move worked very well/poorly and reinforcing accordingly, asking “how could I have noticed that faster”, etc.
Both invested in understanding their emotional barriers and figuring out sustainable ways to handle them. (I personally think both of them have important shortcomings in that department, but they’ve at least put some effort into it and are IMO doing better than they would otherwise.)
Both have a general mindset of actually trying to notice their own shortcomings and improve, rather than make excuses or hide.
And finally: both put a pretty ridiculous amount of effort into improving, in terms of raw time and energy, and in terms of emotional investment, and in terms of “actually doing the thing for real not just talking or playing”.
Nod.
The thing I’m currently hearing you saying (either in contrast to this post, or flagging that this post doesn’t really acknowledge) is:
there’s a bunch of technical knowledge (which is a different type of thing than “metacognitive skill training”, and which also requires a ton of work to master)
the amount of work going into all of this is just, like, a ton, and the phrasing in the post (and maybe other conversations with me) doesn’t really come close to grappling with the enormity of it?
Are there other things you meant?
I think that accurately summarizes the concrete things, but misses a mood.
The missing mood is less about “grappling with the enormity of it”, and more about “grappling with the effort of it” or “accepting the need to struggle for real”. Like, in terms of time spent, we’re talking maybe a year or two of full-time effort, spread out across maybe 3-5 years. That’s far more than the post grapples with, but not prohibitively enormous; it’s comparable to getting a degree. The missing mood is more about “yup, it’s gonna be hard, and I’m gonna have to buckle down and do the hard thing for reals, not try to avoid it”. The technical knowledge is part of that—like, “yup, I’m gonna have to actually for-reals learn some gnarly technical stuff, not try to avoid it”. But not just the technical study. For instance, actually for-reals noticing and admitting when I’m avoiding unpleasant truths, or when my plans won’t work and I need to change tack, has a similar feel: “yup, my plans are actually for-reals trash, I need to actually for-reals update, and I don’t yet have any idea what to do instead, and it looks hard”.
This reminds me of Justin Skycak’s thoughts on Deliberate Practice with Math Academy. I think his ~400 page document about skill building and pedagogy would be useful to you if you haven’t seen it yet.
Ah thanks. I think I might have seen that a long time ago, but when I was in a different headspace.
I maybe also want to note: The most interesting argument against “deliberate practice” as a frame I’ve read was from Common Cog, in his post Problems with Deliberate Practice.
This was the post that introduced me to the term “purposeful practice”, which is “deliberate practice when you don’t really know what you’re doing yet or how to train effectively.” I do think most of what I’m advocating for is in fact purposeful practice (but, I’m holding myself to the standard of pushing towards a deliberate practice curriculum)
He later has a post reviewing the book Accelerated Expertise, in which he advocates throwing out the “deliberate practice” paradigm, because it’s dependent on brittle skill trees of subskills that are hard to navigate if there isn’t an established literature, or if (as in the case of the military in Accelerated Expertise) you find that circumstances change often enough that rebuilding the skill tree over and over isn’t practical.
But, the solution they end up with there is “throw people into simulations that are kind of intense and overwhelming, such that they are forced to figure out how to achieve a goal in a way that organically works for them.” This is actually not that different from my approach (i.e. finding confusing challenges that are difficult enough that you’ll need to navigate confusion and use creative strategy to solve them, and then doing reflections / “Think It Faster” exercises).
I see these two approaches as rounding out each other. While doing Toy exercises (interleaved with your day job), you can learn to notice subskills that are bottlenecking you, and focus directly on those. This is more of a guess than a claim, but I expect that trying to combine the two approaches will yield better results.
I did just that: I set a fifteen-minute timer and tried to think of exercises I could do which would both have direct connections back to my day job, while also improving general cognitive skills. Why? Because I want this to work—this is exciting. However, it is not something that 15 minutes, or more, of focused thinking can solve—I think you’ve drastically oversold that.
In my case (* CAUTION * SAMPLE OF ONE ALERT * CAUTION * ), I’m a freelance videographer.
TL;DR—I couldn’t think of any strategies for improving my metacognition that would help with my deficiencies in my day job, such as marketing. But I vaguely suspect that if I had a specific method for editing found footage into cogent sequences (montages) of about 1 minute, once a week, I might improve metacognitive skills that build on pattern recognition and workflow/operational management.
I think my biggest weaknesses in my day job have to do with anything that comes under self-promotion, generating leads, marketing, sales, and helping clients promote themselves using my video materials. I was unable to think of a single exercise which I think would improve my metacognition in any of those areas. Any exercise, I suspect, would become a checklist, a kind of “do X, Y, Z and get more likes”, rather than honing ways and strategies of thinking.
So what is related to my day job that would? I suspect that if I set myself a weekly challenge of editing a sequence from found footage pertaining to a pseudo-random topic or theme, this might possibly pay dividends in terms that generalize to metacognition. My best guess is that this should improve metacognition on two ends. Firstly, there is sourcing the material and thinking about the most efficient workflow; this kind of thinking applies not just to videos but to organization more generally, and even has parallels in film pre-production. I can’t give you any more specifics about that.
The other end where it would improve metacognitive strategies is more “soft skills”: by creating compressed sequences from divergent sources of material that may not at first blush share a theme, it induces cognitive strategies that allow me to see parallels, or even contrasts, and more importantly to produce a whole from divergent parts. A lot of deceptive editing is basically this, from less divergent sources.
The difficulties become about not Goodharting toward selecting themes and topics for which material is easier to come by, or easier to develop a workflow for: themes and topics for which it is easier to create legible narratives or emotional arcs, rather than just smooshing together a random bunch of images that all seem to pertain to a broad theme.
What constitutes a theme? Or to phrase it better—what commonalities of themes are going to make it easier to develop metacognitive skills by means of weekly editing exercises? Is it verbs that describe actions, like “racing” or “beckoning”, or vaguer verbs like “sharing”, “pleasing”, “alienating”? Does the ambiguity of vague themes like “integrity” or “wisdom” lend itself to better cognitive strategies?
And finally, how do I measure success—where does the feedback come from? Do I operate under a time constraint? Should I install a mouse tracker and key logger and see if I can get finished with the fewest clicks? Which measure will directly connect to metacognitive strategies? I don’t know, and it is easier to poke holes in it than it is to find convincing reasons it would work.
If there’s anything I’ve missed or something clearly wrong about how I’m approaching this, I’d love to hear it. Like I said, finding fast feedback loops for improving metacognitive strategies, so that I find questions worth asking rather than being directed by idle curiosity, notice when my plans are based on shaky assumptions, and develop “a calibrated sense of when your meandering thought process is going somewhere valuable, vs when you’re off track”. OMFG YES PLEASE!
The first thing that comes up when I look at this is that I’m not sure what your goals are, and I’m not sure whether the sort of thing I’m getting at in this post is an appropriate tool.
You say:
This sounds like you’re seeing the metacognition as more of a terminal goal than an instrumental goal (which I think doesn’t necessarily make sense).
I do think metacognition is generally useful, but in an established domain like video-editing or self-promotion in a fairly understood field, there are probably object-level skills you can learn that pay off faster than metacognition. (Most of the point of metacognition there is to sift out the “good” advice from the bad).
I want to separate out...
purposefully practicing metacognition
purposefully practicing particular object-level skills, such as video editing or self-promotion (which involves figuring out what the subskills are that you can get quicker feedback on)
purposefully practice “purposeful practice”, such that you get better at identifying subskills in various (not-necessarily-metacognition-y) domains.
...as three different things that might (or might not) be the right thing for you.
Right now I can’t really tell what your goal is, so I would first just ask “what is it you are trying to achieve?” 1-3 years from now, how would you know if [whatever kind of practice you did] turned out to work? (I think it’s helpful to imagine “what would an outside observer watching a video recording see happening differently?”)
It’s apparent I’ve done a terribly bad job of explaining myself here.
What is my immediate goal? To get good at general problem solving in real life, which means better aligning instrumental activities towards my terminal goals. My personal terminal goal would be to make films and music videos that are pretty and tell good stories. I could list maybe 30 metacognitive deficiencies I think I have, but that would be of no interest to anyone.
What is my 1-3 year goal? Make very high production value music videos that tell interesting stories.
I apologize; I did a terrible job of expressing myself, and apparently said the complete reverse, ass-backwards thing to what I meant[1]. I was looking for exercises that could help improve my metacognition; it’s not even about video editing at all. Most of the exercise would involve thinking about everything logistical that facilitates video editing: transcoding footage, thinking about how to choose themes, creating workflows, and thinking about “which thing do I need to do first?”. But like you said, I spent half an hour actually trying to think about how to put this into practice. And apparently I got it wrong. It’s not easy.
I just didn’t think the Thinking Physics textbook you suggested would be particularly interesting to me or translate well to my life.
Interesting, though, that you say the main point of metacognition is to sift out ‘good advice’ from the bad. I was under the impression metacognition was more generally how we strategize our thinking: deciding what we give attention to, and even adopting framings for problems and situations rather than just letting heuristics and intuitions come to hand, and that these skills apply across domains.
That being said, I’m really bad at sifting advice.
This one! What would that look like in practice? That is certainly the one that interests me.
I’m probably answering this question in the wrong way, but this particular question is not helpful to me, because I can only describe the results—the end result is I make videos with higher production values that communicate better stories. What am I doing differently to eventuate that result? I dunno… magic? If I knew what I should be doing differently, I’d be doing it, wouldn’t I?
I’d like to get really good at replacing “and somehow a good thing happens” with a vivid explanation of a causal chain instead of “somehow”.
Maybe before I focus on metacognition I should get better at being understood in written communication?
Cool, that’s helpful.
This was a fine answer. “The end result is that I make videos with higher production values that communicate better stories.” (To fit my question frame, I’d say “people would observe me making music videos somehow-or-other, and then, those music videos being higher quality than they otherwise would.”)
So, it might totally be that General Problem Solving is the skill it makes sense for you to get better at, but I wouldn’t assume that from the get-go. You might instead just directly study filmmaking.
I realize this is a bit annoying given that you did make an honest attempt at the exercise I laid out (which I think is super cool and I appreciate; barely anyone does that). Before it makes sense to figure out how to develop general problem solving or metacognition, it’s important to doublecheck whether those are the appropriate tool for your goal.
So (I mean this as an earnest question, not a gotcha): why are you currently interested in general problem solving (as opposed to filmmaking)? Is it because general problem solving is intrinsically interesting/rewarding to you (if you could find a path to doing so)? Or because it just seemed pretty likely to be a good step on your journey as a filmmaker? Or just because I gave a prompt to see if you could figure out a way to apply general problem solving to your life, and there was at least some appeal to that?
Also: My actual background / college degree was in filmmaking so I have at least some context on that.
Absolutely not. I cannot stress this enough.
Edit: I just saw your other comment that you studied filmmaking in college, so please excuse the over-explaining in this comment of stuff that is no doubt oversimplified to you. Although I will state that there is no easier time to make films than in film school, where classmates and other members of your cohort provide cast and crew, and the school provides facilities and equipment, removing many of the logistical hurdles I enumerate.
More so the last one. I’m bad at general problem solving. I’m also very messy and disorganized because I can’t find the right “place” for things, which suggests I’m very bad at predicting my own future self in such a way that I can place objects (and notes, for that matter) in assigned spaces that will be easy and obvious for me to recall later.
That being said, my only interest, my single-minded terminal goal, is to tell good visual stories. But to quote Orson Welles, “filmmaking is 2% filmmaking, 98% hustling”. I’m not a hustler. The logistical and financial problem solving that facilitates the storytelling/filmmaking is something I am absolutely terrible at. So much of filmmaking is figuring out logistics, time management, and practical problem solving that has little or nothing to do with the aesthetic intentions. The other half is the sociological component, but that seems less relevant to metacognition.
A poet friend of mine describes the tremendous difference between when she wants to create—she picks up a pen and paper. And a filmmaker who needs to move heaven and earth.
Music videos in fact simplify a lot of the logistical problems of filmmaking: they are shorter, and there’s less of an onus to persuade and pitch an idea, since the band is already invested emotionally (and financially) in having a video made. You just need to help them get their story across, not sell them on your own story. However, that still requires getting commissions and marketing, and presents its own logistical challenges owing to shorter turnarounds.
The simple fact is I’m not a schmoozer or a networker—whether you want to make films or music videos, you need someone to give you the opportunity (usually that means finances, but not necessarily). That’s the first hurdle. The second hurdle is that you can have a great idea for a music video, can storyboard it, and it can all make sense in aesthetic terms, but the logistics of making it happen are another thing entirely. You can have something that makes sense as a story, but making it requires broad problem-solving skills… more so when you don’t have finances.
Now, assuming a musician or band does commission me for a music video and they’ve agreed to a pitch (which happens with more and more frequency as my reputation has grown over 5 years of doing this): now what?
Firstly, you need a space to film the music video. Then there’s scheduling: with musicians, you often need to find a time when they can all take time off work, one that doesn’t impinge on their music-making. Now you find yourself trying to contort the logistics into a window of time that allows you to bump in and out of several locations, set up camera and lights, change costumes and makeup, and maintain continuity (although that’s less of an issue in music videos). I find myself writing Gantt charts, estimating “turnarounds”, and finding the most expedient order to put things in.
The space to film needs to be appropriate aesthetically, it needs to add to the story, the larger the better. It needs the right lighting, that involves a whole host of considerations beyond the aesthetics of lighting and colour theory like—how many watts can we draw from the wall? If we want a diffuse light, where do we physically put the sheet or diffuser in a confined but aesthetically appropriate space? What if we’re not allowed to move certain furnishings as part of the deal with the owners of the space but it’s really ruining our shot? How do we solve that?
I could go on and on and on. Do you know how many film shoots I’ve been on where police were called? The storytelling, the shot selection, the colour palettes, the communication of gesture and intent to performers, the editing and selection of shots, the rhythm and pacing… that’s not the hard part: money and logistics are.
Many of these problems could be solved (read: outsourced) with more finances, by hiring other people who specialize in those things. Most people say “you should get a producer” and it’s like… yeah, how do I find this magical person?
When I have a great story in my head, and you ask me “how do you do that?”—I shrug. I don’t know.
To +1 the rant: my experience across the class spectrum is that many bootstrapped successful people know this, but have learned not to talk about it too much, as most people don’t want to hear supporting evidence for meritocracy; it would invalidate their copes.
To my younger self, I would say: you’ll need to learn to ignore those who would stoke your learned helplessness to excuse their own. I was personally gaslit about important life decisions, not out of malice per se but out of this sort of choice-supportive bias, only to discover much later that jumping in on those decisions actually appeared on lists of advice older folks would give to younger ones.
Hey Ray, I just wanted to send an approval signal that is stronger than a strong upvote.
I’m in the early stages of building a deliberate AI-enhanced performance startup and my work on it is heavily based on your feedbackloop rationality research.
Originally, I was developing a personal-use-only system for regularly using strategies like purposeful practice, metastrategic brainstorming, and the think it faster exercise with significantly reduced friction. This seemed like a low-hanging fruit because most of the friction seemed to come from needing to remember to use the technique and not having actionable instructions easily available.
But then I experienced how game-changing these cognitive tools can be; I found myself repeatedly diving into the expertise acquisition literature to improve my system, and eventually that side project grew into my main thing.
I’m saying this because, for me personally, your work has been far more impactful than, say, reading the sequences or practising CFAR techniques without my deliberate performance architecture in place.
And I would bet a lot of money and months of my career on the belief that—with effective systems facilitating easy application of your research, compelling marketing for giving it a try, and some solid data to back up the marketing—your work will significantly increase the effectiveness of professionals working on the world’s most pressing problems.
So please keep it up.
There are people who believe in your paradigm and value your work.
You deserve to hear that.
I’m planning on gathering data on the effectiveness of feedbackloop-rationality-based interventions during the upcoming cohort of the Non-trivial Research Fellowship to which I’ve been admitted. My plans might change but I thought it would be nice to know that you’re not the only person actively working on the problem.
So here you are.
As someone who has experienced debilitating burnout before, I will provide the boring advice to be more cautious around it than you think you need to be. For this kind of thing, it’s worth seriously considering that you might be critically wrong about how much slack you really need.
Take care!
If you can develop general rationality, why can’t you use it for something practical? Many things would either be intrinsically fun or useful to people primarily interested in AI. For example, become rich from reading. Or excel in some sport or hobby. Maybe you think it’s impossible to do this as an individual. But then I’m skeptical of your rationality skill.
I think you totally can use rationality (that is: “intentionally choosing cognitive algorithms that perform better”) for practical things; it’s just that for most practical things, “practice being better at rationality” is less useful than “practice being better at the-thing-itself.”
If you find rationality practice intrinsically rewarding (as I, and probably many people on this site do), then yeah you should do that. But, purposeful practice is particularly exhausting and effortful. I think most people aren’t doing purposeful practice because they anticipate it being exhausting and effortful and also not super paying off compared to other things they could do, and they are probably correct.
If you have chosen to invest a bunch in rationality, yes you totally should see benefits in practical things.
(Someone else tagged this as a longform review. I did originally plan to include a second half of this post that would have felt like it qualified as such, but I think the current form is ranty enough, and light enough on “self review”, that I don’t feel good about it taking up a slot in the Frontpage Review Widget.)
For me, a strong reason why I do not see myself[1] doing deliberate practice as you (very understandably) suggest is that, on some level, the part of my mind which decides on how much motivational oomph and thus effort is put into activities just in fact does not care much about all of these abstract and long-term goals.
Deliberate practice is a lot of hard work, and the part of my mind which makes decisions about such levels of mental effort just does not see the benefits. There is a way in which a system that circumvents this motivational barrier is working against my short-term goals, and it is the latter that significantly controls motivation: thus, such a system will “just sort of sputter and fail” in such a way that, consciously, I don’t even want to think about what went wrong.
If Feedbackloop Rationality wants to move me to be more rational, it has to work with my current state of irrationality. And this includes my short-sighted motivations.
And I think you do describe a bunch of the correct solutions: building trust between one’s short-term motivations and long-term goals; starting with lower-effort, small-scale goals where both perspectives can get a feel for what cooperation actually looks like and can learn that it can be worth the compromises. In some sense, it seems to me that once one is capable of the kind of deliberate practice that you suggest, much of this bootstrapping of agentic consistency between short-term motivation and deliberate goals has already happened.
On the other hand, it might be perfectly fine if Feedbackloop Rationality requires some not-yet-teachable minimal proficiency at this which only a fraction of people already have. If Feedbackloop Rationality allows these people to improve their thinking and contribute to hard x-risk problems, that is great by itself.
[1] To some degree, I am describing an imaginary person here. But the pattern I describe definitely exists in my thinking, even if less clearly than I put it above.
I do think basically none of this makes sense if you don’t have some particular flavor of ambitious goals. If you don’t have ambitious goals that (probably) depend on leveling up a bunch, then yeah, don’t do that. (unless you intrinsically value the leveling up)
If you sort-of-kind-of have ambitious goals, but you’re not totally bought into them, or have mixed feelings about them, then you probably want to somehow resolve that (which I don’t think is a “deliberate practice” shaped problem, more like a therapy/emotional processing problem, or a “just find a fun project or better work environment that makes things more naturally motivating” problem)
First and foremost, I totally agree with your point on this sort of thing being instrumentally useful, but I’m still having issues seeing how to apply it to my real life. I’m curious about two aspects of deliberate practice that seem interconnected:
On OODA loops: I currently maintain yearly, quarterly, weekly, and daily review cycles where I plan and reflect on progress. However, I wonder if there are specific micro-skills you’re pointing to beyond this—perhaps noticing subtle emotional tells when encountering uncomfortable topics, or developing finer-grained feedback mechanisms. How does this type of systematic review practice fit into your framework for deliberate practice? Are there particular refinements or additional elements you’d recommend? Is it noticing when I’m not doing OODA?
On unlearning: While your post focuses extensively on learning practices, I’m interested in your thoughts on “unlearning”—the process of identifying and releasing ineffective patterns or beliefs. In my experience with meditation, there seems to be a distinction between intellectual understanding and emotional understanding, where sometimes what holds us back isn’t insufficient practice but rather old patterns that need to be examined and released. How do you see the relationship between building new skills and creating space for new patterns through deliberate unlearning? One of the sayings I’ve heard said is that “meditation is the process of taking intellectual understanding and turning it into emotional understanding” which I find quite interesting.